https://sesjournal.org/index.php/1/issue/feed
Spectrum of Engineering Sciences
2025-09-11T12:24:24+05:00
Editor: Dr. Muhammad Ali (info.chiefeditor@yahoo.com)
Open Journal Systems

Spectrum of Engineering Sciences (SES) is a refereed research platform with a strong international focus. It is an open-access, online, editorial-reviewed (blind), double-blind peer-reviewed quarterly research journal with a continuous-publication strategy. The main focus of the Spectrum of Engineering Sciences is to publish original research and review articles centred on computer science and engineering science; the journal was launched by the SOCIOLOGY EDUCATIONAL NEXUS RESEARCH INSTITUTE (SME-PV). This international focus is designed to attract authors and readers from diverse backgrounds. At SES, we believe that drawing on multiple academic disciplines pools the knowledge of two or more fields of study to tackle problems better, by finding solutions grounded in new understandings.

https://sesjournal.org/index.php/1/article/view/994
AI BASED DEEPFAKE AUDIO DETECTION – A REVIEW
2025-09-10T23:00:08+05:00
Authors: Muhammad Aleem, Saqib Riaz, Muhammad Tayan Aziz, Engr. Dr. Abdul Rehman Chishti (contact: behishatktk2000@gmail.com)

Abstract: The rapid rise of deepfake audio presents a dual reality: it enables innovative applications such as voice assistants and accessibility tools, while also posing severe risks to security and trust through fraud and misinformation. Modern systems can clone a voice from just a few seconds of audio, making it hard to distinguish real from synthetic speech. This study investigates machine learning methods for detecting deepfake audio, using features such as MFCCs and spectrograms with classifiers including Random Forests and CNNs on datasets such as FoR and ASVspoof. Results show that combining optimized features with advanced models significantly boosts detection accuracy. We also address ongoing challenges such as limited data diversity, adversarial attacks, and real-world scalability, alongside ethical concerns. Our goal is to contribute to the development of reliable and practical detection systems.

Keywords: deepfake audio, voice cloning, audio forensics, machine learning, MFCC, spectrogram, deepfake detection, Random Forest, convolutional neural networks (CNN)

Published: 2025-09-10 | Copyright (c) 2025 Spectrum of Engineering Sciences

https://sesjournal.org/index.php/1/article/view/973
REINFORCEMENT LEARNING-BASED TRAFFIC SIGNAL OPTIMIZATION FOR SMART CITIES
2025-09-08T18:02:42+05:00
Authors: Dr. Khurram Zeeshan Haider, Mubashir Iqbal, Ammad Hussain*, Hina Shoaib, Nauman Zafar Hashmi, Kainat Rizwan (contact: behishatktk2000@gmail.com)

Abstract: Modern cities do not provide enough road infrastructure, which has aggravated congestion, commute times, and environmental pollution. Traditional traffic signal systems (both fixed-time and actuated controls) cannot adjust to real-time changes and therefore offer few options for managing traffic. Recent developments in artificial intelligence, especially reinforcement learning (RL), open new possibilities for adaptive, data-driven traffic control. This paper examines how deep reinforcement learning (DRL) frameworks can be used to optimize traffic lights in smart cities.
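To make the approach concrete before the results that follow, here is a minimal tabular Q-learning sketch of a signal controller for a single intersection. It is a toy stand-in for the paper's DRL-in-SUMO setup: the state discretization (queue lengths), action set, reward, and traffic dynamics are illustrative assumptions, not the authors' design.

```python
# Minimal tabular Q-learning sketch for a single traffic signal.
# Toy stand-in for the DRL-in-SUMO setup described above; state, reward,
# and dynamics are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = [0, 1]          # 0 = serve north-south, 1 = serve east-west
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = defaultdict(lambda: [0.0, 0.0])

def step(queues, action):
    """Toy dynamics: the served approach drains, the other accumulates."""
    ns, ew = queues
    if action == 0:
        ns, ew = max(0, ns - 3), ew + random.randint(0, 2)
    else:
        ns, ew = ns + random.randint(0, 2), max(0, ew - 3)
    reward = -(ns + ew)   # penalize total queue length (proxy for delay)
    return (ns, ew), reward

state = (5, 5)
for t in range(10_000):
    a = random.choice(ACTIONS) if random.random() < EPS \
        else max(ACTIONS, key=lambda x: Q[state][x])
    nxt, r = step(state, a)
    # Standard Q-learning update toward reward plus discounted best next value
    Q[state][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][a])
    state = nxt
```

In the paper's setting, the toy `step` would be replaced by a SUMO simulation step and the table by a deep network; the update rule is the same idea.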
RL-based controllers are compared with SUMO-based simulations of fixed-time, actuated, and SURTRAC-like scheduling systems at single intersections, corridors, and grid networks. Results show that, compared with the baseline methods, RL reduces delays by up to 45 per cent, shortens queues by over 40 meters, increases throughput by 28 per cent, and cuts CO2 emissions by 19 per cent. Further, when incidents occur, RL remains stable and restores flow within 5-10 minutes, outperforming conventional systems. The results highlight the scalability, sustainability, and resilience of RL-based traffic management. The paper concludes with advice on hybrid deployment approaches, their relation to connected-vehicle data, and further research on equity, interpretability, and real pilot implementations to build intelligent transportation in urban areas.

Keywords: reinforcement learning, traffic signal control, deep reinforcement learning, SUMO, smart cities, sustainable mobility

Published: 2025-09-08 | Copyright (c) 2025 Spectrum of Engineering Sciences

https://sesjournal.org/index.php/1/article/view/949
DESIGN OF A COMPACT, COST-EFFECTIVE SHELL AND TUBE HEAT EXCHANGER WITH IMPROVED EFFICIENCY
2025-09-03T13:30:07+05:00
Author: Syed Zeeshan Shah (contact: mahboobmails@gmail.com)

Abstract: Heat exchangers are widely used in industrial applications but face persistent challenges related to high fabrication costs, large size, fouling, and limited thermal efficiency. Conventional shell and tube heat exchangers, although robust and suitable for high-pressure and high-temperature environments, often require expensive materials and complex manufacturing processes, while design modifications typically involve trade-offs between performance and hydraulic resistance. Addressing these limitations is essential for enhancing energy efficiency, reducing operational costs, and promoting more compact, cost-effective designs suitable for small- and medium-scale industries. The present study aimed to design, fabricate, and evaluate a compact, low-cost shell and tube heat exchanger with structural improvements to increase efficiency. The exchanger was constructed from mild steel with spiral baffles and tested using chilled water as the cold stream and hot water as the heating medium. Thermocouples, pressure gauges, and flow meters were employed to record inlet and outlet temperatures, pressure drops, and flow rates, while thermal performance was analyzed using heat transfer rate (Q), log mean temperature difference (LMTD), overall heat transfer coefficient (U), and effectiveness (ε). Statistical reliability was ensured by triplicate measurements and mean-value analysis. Results showed that the inclusion of a spiral rod significantly improved turbulence, leading to higher heat transfer efficiency (62.4% with the spiral rod vs. 42.7% without; p < 0.05), and increased the heat transfer rate from 7.44 kW to 16.1 kW despite similar flow regimes. The compact exchanger achieved performance comparable to larger industrial designs while maintaining a surface area of only 2.17 m² and a material cost of 56,000 units.
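Since the evaluation rests on Q, LMTD, U, and ε, a short worked sketch of those formulas may help. The mass flows and inlet/outlet temperatures below are assumed illustrative values (chosen so the duty lands near the reported 16.1 kW); only the 2.17 m² surface area comes from the abstract.

```python
# Worked sketch of the metrics above: Q, LMTD, U, and effectiveness.
# Flows and temperatures are illustrative assumptions, not the paper's data;
# only A = 2.17 m^2 comes from the abstract.
import math

cp = 4.186                         # kJ/(kg*K) for water
m_hot, m_cold = 0.55, 1.20         # kg/s, assumed mass flow rates
T_hot_in, T_hot_out = 70.0, 63.0   # degC, assumed hot-side temperatures
T_cold_in = 15.0                   # degC, assumed cold inlet
A = 2.17                           # m^2, surface area (from the abstract)

C_hot, C_cold = m_hot * cp, m_cold * cp
Q = C_hot * (T_hot_in - T_hot_out)           # heat duty, ~16.1 kW here
T_cold_out = T_cold_in + Q / C_cold          # energy balance on the cold side

# Log mean temperature difference (counter-flow arrangement)
dT1, dT2 = T_hot_in - T_cold_out, T_hot_out - T_cold_in
LMTD = (dT1 - dT2) / math.log(dT1 / dT2)

U = Q / (A * LMTD)                           # overall coefficient, kW/(m^2*K)
eps = Q / (min(C_hot, C_cold) * (T_hot_in - T_cold_in))   # effectiveness

print(f"Q={Q:.1f} kW  LMTD={LMTD:.1f} K  U={U:.3f} kW/m2K  eps={eps:.2f}")
```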
In conclusion, the study demonstrated that spiral-rod integration effectively enhanced thermal performance, offering a feasible pathway toward more efficient, affordable, and sustainable shell and tube heat exchanger technology.

Published: 2025-09-03 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/954
CLASSIFYING DEFECTIVE AND NON-DEFECTIVE PRODUCTS USING LDA AND VOTING CLASSIFIER IN QUALITY CONTROL PROCESSES
2025-09-03T17:17:41+05:00
Authors: admin admin (adnanfayaz6613@gmail.com), Safdar Ameen Khan, Raja Jalees-ul-Hussen Khan, Hina Shoaib (contact: behishatktk2000@gmail.com)

Abstract: This research explores the application of machine learning techniques to classifying defective and non-defective products within a quality control process. Two models, Linear Discriminant Analysis (LDA) and a Voting Classifier, were evaluated for their performance in identifying defective items. The study utilized a wine quality dataset, where the 'quality' attribute was binarized into defective and non-defective classes. The models were assessed on classification accuracy, precision, recall, and other evaluation metrics. The LDA model achieved a test-set accuracy of 72.71%, with balanced precision and recall for both classes: a precision of 0.68 and a recall of 0.74 for the non-defective class (Class 0), and a precision of 0.78 and a recall of 0.72 for the defective class (Class 1). These results highlight the model's ability to handle the classification task with reasonable accuracy and consistency. In comparison, the Voting Classifier significantly outperformed LDA on the test set, achieving an accuracy of 81.04%. It showed higher precision (0.79) and recall (0.77) for the non-defective class and an impressive precision (0.82) and recall (0.84) for the defective class. These results underline the robustness of the Voting Classifier in handling complex classification tasks with improved reliability and performance. The findings indicate that while LDA provides baseline performance, the Voting Classifier demonstrates superior capabilities in defect detection, making it the better candidate for quality control applications. This study emphasizes the importance of model selection in optimizing testing outcomes for industrial processes.

Keywords: Defective Classification, Machine Learning, Linear Discriminant Analysis (LDA), Voting Classifier

Published: 2025-09-03 | Copyright (c) 2025 Spectrum of Engineering Sciences

https://sesjournal.org/index.php/1/article/view/952
THE EMOTIONAL COMPASS: NAVIGATING THE REALM OF HUMAN EMOTIONS USING EEG AND PHYSIOLOGICAL SIGNALS
2025-09-03T14:43:09+05:00
Authors: Muhammad Hamza Saleem, Saira Gillani, Khoula Saleem, Dr. Ghulam Mustafa, Muhammad Zulkifl Hasan, Muhammad Zunnurain Hussain (contact: mahboobmails@gmail.com)

Abstract: Understanding emotions through EEG signals plays a crucial role in fields such as healthcare, human-computer interaction, and affective neuroscience. This study classified five emotional states (Disgust, Fear, Happy, Neutral, and Sad), combining eye-blinking features with differential entropy attributes extracted from EEG in the SEED-V dataset. The features were standardized via padding, and the emotion labels were one-hot encoded to support model training.
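Before the model comparison that follows, here is a minimal sketch of an LSTM classifier over padded EEG feature sequences with one-hot labels, matching the setup just described. The input shape, layer sizes, and training settings are illustrative assumptions, not the paper's architecture.

```python
# Minimal LSTM emotion classifier over padded EEG feature sequences.
# Shapes, layer sizes, and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

T, F, N_CLASSES = 120, 64, 5      # assumed: timesteps, features, 5 emotions

model = models.Sequential([
    layers.Masking(mask_value=0.0, input_shape=(T, F)),  # skip zero padding
    layers.LSTM(128),                                    # temporal dependencies
    layers.Dropout(0.3),
    layers.Dense(N_CLASSES, activation="softmax"),       # one-hot targets
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Dummy stand-in data with the assumed shapes
X = np.random.rand(32, T, F).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, N_CLASSES, 32), N_CLASSES)
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```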
Three advanced deep learning architectures were considered: CNN, LSTM, and TFT. LSTM served as the primary model and CNN as the secondary, while the TFT performed comparatively worse but excels in interpretability. The results show that LSTM outperformed both CNN and TFT, reaching a maximum accuracy of 80%, against 78% for CNN and 60% for TFT. LSTM's macro-average F1-score also improves on that of CNN (0.77) and TFT (0.58), indicating that LSTM better captures long-term dependencies in sequential EEG data. While CNN is mostly concerned with spatial features, TFT struggled with fine temporal dependencies. However, LSTM faces obstacles of increased computational complexity, longer training time, and potential overfitting on smaller datasets. LSTM-based models also inherit limited interpretability owing to their intricate internal structure. Even with these setbacks, the findings highlight the importance of integrating physiological (EEG) and behavioral (eye-blinking) features for robust emotion recognition, paving the road for advances in mental health care, brain-computer interfaces, and adaptive human-computer interaction.

Published: 2025-09-03 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/953
EMPOWERMENT OF ARTIFICIAL INTELLIGENCE (AI) IN PREVENTING AND DETECTING RANSOMWARE: AN ANALYTICAL REVIEW
2025-09-03T14:53:57+05:00
Authors: Amir Mohammad Delshadi, Obaid Ullah, Younus Khan, Muhammad Waleed Iqbal, Hafiz Abdul Basit Muhammad, Khalid Hamid, Fakhar Abbas, Muhammad Ibrar (contact: mahboobmails@gmail.com)

Abstract: Ransomware is an emerging cyber threat that requires innovative and flexible solutions. Drawing on recent advances in machine learning, deep learning, and explainable AI, this study explores the potential of artificial intelligence (AI) to identify and stop ransomware. According to previous and ongoing studies, AI significantly improves detection and response speed across networks, the Internet of Things (IoT), and mobile devices. Explainability and transparency are becoming increasingly important, particularly in light of the growing challenges posed by generative AI. Notwithstanding these developments, one of the primary research gaps is the development of standardized, interpretable, real-time AI models that can adjust to diverse ransomware variants. By evaluating current approaches and suggesting paths toward a more scalable and efficient AI-based defense system, this study closes that gap.

Published: 2025-09-03 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/955
NAVIGATING CONTEMPORARY CHALLENGES OF SOFTWARE QUALITY ASSURANCE IN SOFTWARE TESTING
2025-09-04T08:00:32+05:00
Authors: Saim Masood Shaikh, Humera Azam, Muhammad Zamin Ali Khan, Saad Akbar (contact: mahboobmails@gmail.com)

Abstract: Software Quality Assurance Engineers (SQA Engineers) play a crucial role in ensuring the reliability and functionality of software systems through rigorous testing. However, they encounter various challenges that can hinder their effectiveness and accuracy in software testing. This research aims to identify and analyze the key challenges SQA Engineers face in their daily work.
To address this, the study begins with a detailed review of the software testing process and the essential role of SQA Engineers. It then systematically investigates the contemporary challenges SQA Engineers face in the software industry, recognizing their significant impact on overall performance and efficiency. The research delves into the root causes of these challenges, which often result in delays and pressure during software deployment. Multiple software houses are studied to gain a comprehensive understanding of challenge categories and the parties responsible for them. The research identifies and categorizes a range of challenges experienced by SQA Engineers, including issues related to communication, resource allocation, documentation, test automation, and evolving technology trends. These findings highlight the complex nature of these challenges and their impact on software development timelines and quality. In conclusion, this research emphasizes the critical importance of addressing the contemporary challenges SQA Engineers face: they affect not only individual projects but the entire software industry. By understanding and mitigating these challenges, we can improve the efficiency and effectiveness of software testing processes, leading to enhanced software quality and timely deployments.

Published: 2025-09-04 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/959
EFFECTS OF CORROSION ON THE MECHANICAL PROPERTIES OF STEEL BARS
2025-09-04T10:20:56+05:00
Authors: Syed Amir Sohail, Sikandar Bilal Khattak, Muhammad Arshad, Quaid Jamal, Mansoor Mustafa (contact: mahboobmails@gmail.com)

Abstract: Corrosion is a fundamental problem that deteriorates steel structures through environmental exposure and chemical reactions, leading to failures with significant economic losses and negative environmental effects. This study investigates the effects of corrosion on the mechanical properties of steel bars. A comparative experimental design with seven steps is used: problem definition, parameter identification, sample preparation, data collection, data analysis, results, and conclusion. Data were analyzed by comparing the mechanical properties of corroded steel bars against international standards such as those of the American Society for Testing and Materials (ASTM), the International Organization for Standardization (ISO), and Pakistan Standards (PS). Moreover, the coefficient of variation was calculated for all parameters to check data consistency and reliability. The results showed that corrosion severely degrades the mechanical properties of steel bars and increases the risk of failure: yield strength decreased by 17.62%, ultimate strength by 20.19%, tensile strength by 27.52%, percentage elongation by 15.76%, and nominal diameter by 6.62%. This research highlights how corrosion degrades mechanical properties and thereby undermines the structural integrity, safety, and durability of steel-reinforced structures.
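The two calculations behind these numbers are straightforward; a short sketch follows. The sample readings are invented for illustration (only the method mirrors the abstract), and the reported percentages above remain the paper's results.

```python
# Sketch of the two calculations used above: percentage reduction of a
# mechanical property after corrosion, and the coefficient of variation (CV)
# across triplicate measurements. Sample readings are invented.
import statistics

def percent_reduction(reference: float, corroded: float) -> float:
    """Loss of a property relative to the uncorroded reference, in %."""
    return 100.0 * (reference - corroded) / reference

def coefficient_of_variation(samples: list[float]) -> float:
    """CV = sample std. dev. / mean, in %; a low CV indicates consistent data."""
    return 100.0 * statistics.stdev(samples) / statistics.mean(samples)

yield_ref = [520.0, 515.0, 523.0]   # MPa, assumed uncorroded triplicate
yield_cor = [428.0, 430.0, 426.0]   # MPa, assumed corroded triplicate

loss = percent_reduction(statistics.mean(yield_ref), statistics.mean(yield_cor))
print(f"yield strength loss: {loss:.2f}%")        # ~17.6% reported in the paper
print(f"CV (corroded): {coefficient_of_variation(yield_cor):.2f}%")
```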
Published: 2025-09-04 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/962
LEVERAGING DEEP LEARNING FOR EARLY DETECTION OF DIABETES MELLITUS
2025-09-08T09:32:15+05:00
Authors: Myra Ashraf, Maria Bibi, Muhammad Tanveer Meeran (contact: editorshnakhat@gmail.com)

Abstract: Diabetes Mellitus is a chronic and life-threatening disease that poses a significant global health challenge. While machine learning (ML) models have been widely adopted for predicting diabetes incidence, a critical research gap remains: most models function as "black boxes," prioritizing overall prediction accuracy over the identification and interpretation of the key underlying risk factors. This study addresses this gap by developing and evaluating an ensemble machine learning framework designed not only for high-accuracy detection but also for the critical task of identifying and ranking the attributes that contribute most to diabetes onset. Using a dataset of clinical and demographic features, we trained and tested multiple models, including XGBoost, Random Forest, and Support Vector Machines. Our proposed ensemble model achieved a superior accuracy of 87% and demonstrated high precision, outperforming existing benchmark models. Furthermore, through feature-importance analysis using SHAP (SHapley Additive exPlanations) values, we identified the top factors, e.g., glucose level, BMI, and age, as the most salient predictors. The findings provide actionable insights for healthcare providers, enabling targeted prevention strategies and data-driven interventions for at-risk populations, thereby moving beyond mere prediction towards actionable, preventative healthcare.

Keywords: Support Vector Machine, Random Forest, Diabetes, Artificial Intelligence

Published: 2025-09-08 | Copyright (c) 2025 Spectrum of Engineering Sciences

https://sesjournal.org/index.php/1/article/view/965
IMPROVING THE ACCURACY OF IMBALANCED DATASET USING K-MEANS CLUSTERING
2025-09-08T11:52:30+05:00
Authors: Adnan Saeed, Dr. Anwar Ali Sanjrani, Syed Khalid Shah Bukhari, Shabeer Ahmad (contact: mahboobmails@gmail.com)

Abstract: Class imbalance represents a significant obstacle in predictive modelling, frequently producing biased models that perform poorly on minority classes. Conventional classification methods tend to favour the majority class, leading to suboptimal recall and precision for the critical minority outcomes. To address this issue, this paper proposes a prediction approach that combines unsupervised K-Means clustering with supervised classification algorithms. The key idea is to capture underlying group-level behavioural patterns through clustering and then feed the resulting cluster labels as auxiliary features into the classification pipeline. This hybrid approach aims to augment the feature space, enhance model sensitivity to the minority class, and ultimately improve overall predictive power. Experimental testing on two real customer-churn datasets showed that models trained with cluster labels consistently performed better on all important performance metrics.
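A minimal sklearn sketch of this hybrid follows: K-Means cluster labels appended as an auxiliary feature before a supervised classifier (paired here with KNN, the combination highlighted below). The synthetic imbalanced dataset and all parameters are illustrative assumptions standing in for the churn data.

```python
# Sketch of the hybrid approach above: K-Means cluster labels appended as an
# auxiliary feature before supervised classification. Synthetic imbalanced
# data stands in for the churn datasets; parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=42)           # 90/10 class imbalance
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Fit clusters on training data only, then append the label as a new feature
km = KMeans(n_clusters=5, n_init=10, random_state=42).fit(X_tr)
X_tr_aug = np.column_stack([X_tr, km.predict(X_tr)])
X_te_aug = np.column_stack([X_te, km.predict(X_te)])

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr_aug, y_tr)
print(classification_report(y_te, knn.predict(X_te_aug)))
```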
Most interestingly, performance increased significantly when the K-Means clustering algorithm was combined with the K-Nearest Neighbours (KNN) classifier, compared with either method used separately as a baseline. The proposed framework is an effective and feasible way to address data imbalance in customer-churn prediction and similar classification systems.

Published: 2025-09-08 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/969
HYBRID ARIMA AND LSTM DEEP LEARNING MODELS EMPOWERING AND ENHANCING FORECAST ACCURACY IN SALES
2025-09-08T15:52:54+05:00
Authors: Halima Fatima, Bisma Tahir, Khalid Hamid (contact: mahboobmails@gmail.com)

Abstract: Businesses need to forecast sales properly, which helps improve inventory management and future planning. Classical forecasting techniques such as the Autoregressive Integrated Moving Average (ARIMA) model are effective at forecasting linear tendencies in the data as well as seasonality (periodic fluctuation). Nevertheless, they may struggle with the non-linear, complicated, unsteady, and uneven data that define the current sales environment. Conversely, deep learning networks such as the Long Short-Term Memory (LSTM) network are very powerful at learning and predicting non-linear, long-term dependencies in data. The absence of a globally optimal model is a major gap in the existing literature, with each model having its niche and suiting a specific type of data. This research proposes and tests a hybrid ARIMA-LSTM deep learning model that enhances the accuracy of sales forecasting. The hybrid model is designed to use the merits of both methodologies: ARIMA handles the linear components of the time series, and LSTM handles the non-linear components. With these two potent methods combined, we expect a more consistent and adaptable forecasting solution. The model's performance will be compared with standalone ARIMA and LSTM models on different metrics, under the hypothesis that the hybrid model will produce much lower forecast errors. To optimize business activities and minimize financial risk, forecasting accuracy must be enhanced, and this project provides a distinctive and highly effective strategy to that end.

Published: 2025-09-08 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/974
THROUGHPUT EFFICIENCY ENHANCEMENT IN INTERNET OF HEALTH THINGS
2025-09-08T21:36:54+05:00
Authors: Rahat Ali Khan, Shahzad Memon, Naila Sher Afzal (contact: editorshnakhat@gmail.com)

Abstract: The Internet of Health Things is an emerging technology for the betterment of human health care. It uses the internet as its core technology together with tiny machines capable of uninterrupted monitoring of health parameters. The main obstacle in the Internet of Health Things is the size of the sensors: their tiny size makes them ideal for wearing on the body for long spans of time, but it also means the battery must be equally small. In this paper, sensors are placed in a calculated way so that they are in a better position to send recorded data to the sink.
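A small sketch of the kind of energy- and distance-aware forwarding rule this amounts to follows; the cost function itself is introduced in the next paragraph. The functional form, weights, and node data below are illustrative assumptions, not the paper's exact formula.

```python
# Sketch of an energy- and distance-aware cost function of the kind proposed
# below: after each round, the node minimizing cost(residual energy, distance
# to sink) is chosen to forward. Form, weights, and data are illustrative.
import math
from dataclasses import dataclass

@dataclass
class Sensor:
    node_id: int
    x: float
    y: float
    energy: float                  # residual energy, joules

SINK = (50.0, 50.0)

def cost(node, alive, w_dist=0.5, w_energy=0.5):
    """Low cost = close to the sink and energy-rich (normalized per round)."""
    d_max = max(math.dist((n.x, n.y), SINK) for n in alive)
    e_max = max(n.energy for n in alive)
    d = math.dist((node.x, node.y), SINK)
    return w_dist * (d / d_max) + w_energy * (1.0 - node.energy / e_max)

def pick_forwarder(nodes):
    """Re-evaluated at the end of every simulation round."""
    alive = [n for n in nodes if n.energy > 0]
    return min(alive, key=lambda n: cost(n, alive))

nodes = [Sensor(1, 20, 30, 4.8), Sensor(2, 60, 55, 3.9), Sensor(3, 45, 48, 2.1)]
print(pick_forwarder(nodes).node_id)   # node 2: near the sink, decent energy
```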
Based on the energy levels of the sensors and their distance from the sink, a cost function is proposed. It is recalculated at the end of every simulation round, which helps balance energy consumption in Internet of Medical Things routing.

Published: 2025-09-06 | Copyright (c) 2025 Spectrum of Engineering Sciences

https://sesjournal.org/index.php/1/article/view/978
AI POWERED HIGH RESOLUTION ULTRASOUND DEEP LEARNING APPROACHES FOR NEXT GENERATION MEDICAL IMAGING
2025-09-09T14:19:34+05:00
Authors: Muhammad Usman, Muzammil Ahmad, Abdullah Faiz, Muhammad Ans Yaqoob, Sumaiya Fazal, Bilal Faiz, Muhammad Zahoor (contact: mahboobmails@gmail.com)

Abstract: High-resolution ultrasound (HRUS) is increasingly recognized as a critical imaging modality due to its safety, affordability, and real-time diagnostic capability. Although its applications have expanded beyond obstetrics into cardiology, oncology, and emergency medicine, conventional ultrasound remains limited by low spatial resolution, operator dependency, and image artifacts. This study presents the development of an improved HRUS framework integrating deep learning-based image processing. Convolutional neural networks (CNNs), generative adversarial networks (GANs), and deep super-resolution models were implemented to enhance resolution, suppress artifacts, and reduce noise. Benchmark ultrasound datasets were used for model training and validation, with performance evaluated against conventional image reconstruction techniques. The proposed deep learning-enhanced HRUS system demonstrated significant improvements in image quality, with up to 35% enhancement in spatial resolution and 40% reduction in noise compared to standard methods. Furthermore, the system reduced operator dependence by providing automated image optimization, enabling more consistent diagnostic outcomes. Integrating deep learning into HRUS offers a transformative approach to medical imaging, providing higher diagnostic accuracy, improved visualization of subtle anatomical details, and broader clinical applicability. This synergy between HRUS and deep learning has the potential to establish ultrasound as a more reliable, versatile, and widely adopted diagnostic tool across multiple medical specialties.

Published: 2025-09-09 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/979
RESOURCE-AWARE MACHINE LEARNING FOR CLOUD-EDGE TASK ALLOCATION: A SMALL-SCALE SYSTEM AND FEDERATED-LEARNING IMPLICATIONS
2025-09-09T14:50:41+05:00
Author: Sadaqat Hussain (contact: mahboobmails@gmail.com)

Abstract: Heterogeneity in computation and communication across cloud and edge platforms presents a significant obstacle for task allocation. Heuristics that greedily assign tasks to the fastest worker can overload high-capability nodes while leaving slower nodes idle. This paper presents Cloud-Assisted Resource Allocation Using Machine Learning, a reproducible prototype that learns to allocate "cloudlets" to edge workers. The system comprises a synthetic data seeder, a cloud module that trains a decision-tree classifier to predict the best worker for each cloudlet, and a master scheduler that uses the trained model to dispatch tasks subject to compute (MIPS) and network (bandwidth) constraints. We benchmark the machine-learning (ML) scheduler against a greedy baseline and analyze per-worker durations and makespan.
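Before the reported numbers, here is a minimal sketch of the prototype's core step: fitting a decision tree that maps cloudlet features to a preferred worker and using it to dispatch. The features, seeding heuristic, and parameters are illustrative assumptions, not the paper's actual seeder; only the three worker speeds echo the run described below.

```python
# Sketch of the core idea: train a decision tree to predict the best worker
# for each cloudlet, then dispatch with it. Features and labels are synthetic
# assumptions standing in for the paper's seeded data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
WORKER_MIPS = {0: 2.3, 1: 2.6, 2: 3.0}   # W1..W3 speeds quoted below

# Synthetic cloudlets: [instructions (MI), input size (MB), bandwidth need (Mbps)]
X = rng.uniform([100, 1, 1], [5000, 50, 20], size=(500, 3))

# Seed labels with a toy heuristic: big jobs -> fastest worker, else spread load
y = np.where(X[:, 0] > 3000, 2, rng.integers(0, 3, size=500))

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

def dispatch(cloudlet):
    """Predict a worker; a real scheduler would also check saturation."""
    return int(clf.predict([cloudlet])[0])

print(dispatch([4200, 30, 10]))   # large cloudlet -> worker 2 (3.0 MIPS)
```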
On a representative run with three workers (W1 = 2.3 MIPS, W2 = 2.6 MIPS, W3 = 3.0 MIPS) and heterogeneous links, the ML scheduler completes the workload in 988 s, whereas the greedy baseline requires 1,020 s: a ~3.2% reduction in makespan and a substantial reduction in W3 overload. Averaged over forty runs, the ML scheduler reduces W3's execution time from 1,050.5 s to 982.5 s, confirming consistent load balancing. Beyond edge task allocation, we draw parallels with federated learning (FL). The task → worker mapping resembles client → round selection in FL, and resource-aware scheduling can mitigate stragglers and reduce time-to-accuracy. We discuss how the proposed prototype could be extended with systems such as the Kubernetes Horizontal Pod Autoscaler [1] and KubeEdge [2], and we outline future work on integrating federated-learning frameworks like Flower [3].

[1] https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
[2] https://kubernetes.io/blog/2019/03/19/kubeedge-k8s-based-edge-intro/
[3] https://arxiv.org/abs/2007.14390

Published: 2025-09-09 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/985
EVALUATING THERMAL COMFORT IN OFFICE BUILDINGS: A SUSTAINABILITY PERSPECTIVE
2025-09-10T10:48:00+05:00
Authors: Norheen Amina, Zakir Arfat, Huda Riaz, Hira Ishtiaq, Habiba Mohsin, Ahmad Riaz (contact: mahboobmails@gmail.com)

Abstract: This study combines measured indoor environmental factors with occupant perceptions to examine thermal comfort conditions in faculty and staff offices at the University of Engineering and Technology (UET) in Lahore, Pakistan. 130 respondents from various departments completed a structured questionnaire that included the ASHRAE 7-point thermal sensation scale and categorical evaluations of productivity and health. Physical measurements covered room temperature, air velocity, and relative humidity. Although typical indoor temperatures (28.6°C) exceed ASHRAE 55 summer standards, more than 80% of respondents rated their thermal environment as satisfactory, indicating adaptive tolerance within the local climatic and cultural context. Effects on productivity were very minor, mostly in the late afternoon. The study's unique contribution is its combination of quantitative building performance data with qualitative user feedback in a higher-education setting in a developing country.
This yields occupant-centered, evidence-based recommendations for campus building design, such as the need for centralized cooling and better ventilation throughout academic spaces.

Published: 2025-09-10 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/986
TOWARDS NEXT-GENERATION AUTOMATION: DATA-DRIVEN SYNERGIES OF AI AND ROBOTICS THROUGH DATA ENGINEERING AND DATA SCIENCE
2025-09-10T11:05:21+05:00
Authors: Muhammad Waleed Iqbal, Umair Ashfaq, Aftab Ahmed Soomro, Obaidullah, Younus Khan, Shahzaib Khan, Muhammad Ibrar (contact: mahboobmails@gmail.com)

Abstract: The convergence of artificial intelligence (AI) and robotics is driving a paradigm shift in how automation systems are conceptualized, designed, and deployed across diverse industries. While robotics provides the physical execution and AI offers adaptive intelligence, the success of next-generation intelligent automation depends heavily on the strength of its underlying data ecosystem. This paper argues that the integration of data engineering and data science forms the critical foundation upon which scalable, adaptive, and trustworthy automation can be achieved. Data engineering ensures the creation of robust pipelines for data collection, cleansing, integration, governance, and security, enabling high-quality and real-time data availability. In parallel, data science leverages these curated datasets to generate insights, optimize control strategies, and empower AI models to support robotic decision-making in complex and uncertain environments. This research presents a comprehensive framework that illustrates how data engineering and data science synergistically interact to enhance AI- and robotics-driven automation. The framework consists of three interconnected layers: (i) data infrastructure and engineering, for real-time ingestion, standardization, and governance of heterogeneous data; (ii) AI and data science modules, for predictive modeling, anomaly detection, and reinforcement learning-driven optimization; and (iii) robotic intelligence systems, which transform predictive insights into adaptive action, ensuring autonomy, precision, and scalability. Experimental simulations and sector-specific case studies spanning manufacturing assembly lines, healthcare robotics for precision surgery and rehabilitation, and logistics systems for smart supply chains demonstrate measurable improvements in efficiency, fault tolerance, adaptability, and decision-making enabled by the proposed approach. The findings underscore that the future of automation cannot rely solely on advanced robotics or AI algorithms in isolation. Instead, it requires a tightly integrated, data-driven architecture that unites data engineering and data science to achieve resilience, scalability, compliance with regulatory frameworks, and explainability of system outputs. By articulating this synergy, the study contributes a novel perspective on how next-generation automation systems can be systematically designed to balance technical innovation with real-world operational requirements.
Ultimately, this research advances the discourse on intelligent automation by proposing a holistic paradigm that redefines how AI, robotics, and data ecosystems converge to build scalable, trustworthy, and future-ready automation infrastructures.

Published: 2025-09-10 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/987
IOT-BASED HEALTH MONITORING SYSTEM USING MYSIGNALS AND LORA FOR REMOTE PATIENT CARE
2025-09-10T11:16:57+05:00
Authors: Sofia Shafiq, Irsha Qureshi, Noor-ul-Huda, Shahr Bano, Amrozia Sundas, Sabahat Tasneem (contact: mahboobmails@gmail.com)

Abstract: The Internet of Things (IoT) is revolutionizing healthcare by enabling real-time monitoring, diagnosis, and data collection from patients using intelligent sensors and devices. This paper presents an IoT-based health monitoring system that integrates MySignals hardware with LoRa wireless communication to monitor vital signs such as ECG, body temperature, heart rate, and oxygen saturation. The proposed system is designed for remote and home-based care, addressing key challenges in rural healthcare accessibility, data accuracy, and real-time communication. The implementation results show successful integration of the sensors and LoRa communication, offering a cost-effective and scalable solution for improving patient outcomes.

Published: 2025-09-10 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/992
AI IN SOCIAL CHANGE IN PAKISTAN: CHALLENGES, OPPORTUNITIES AND WAY FORWARD
2025-09-10T17:16:34+05:00
Author: Dr. Siraj Bashir Baloch (contact: mahboobmails@gmail.com)

Abstract: Artificial intelligence (AI) is considered a powerful tool for social change across sectors worldwide: it has emerged in governance, education, human rights movements, and economic growth. Pakistan is a traditional society, and the potential of AI here lies in changing established norms, empowering youth, and reducing economic inequality. Nevertheless, there is still immense distrust regarding the country's preparedness to utilize AI for serious societal development. This paper examined the role of AI as a tool of social change in Pakistan, analyzing the country's institutional readiness, infrastructure, and policy for meeting the challenges of this digital technology. The research poses two questions: first, how can AI be used as a tool of social change in Pakistan? And second, can Pakistan afford to introduce AI across its socio-political economy? The study used mixed methods, including quantitative surveys of students, government officials, and AI experts, to determine how ready Pakistan is in terms of AI regulatory frameworks and digital infrastructure. The research revealed that Pakistan is not yet prepared for AI, but that AI has the potential to bring social change via education, entrepreneurship, and digital activism. Pakistan faces numerous AI challenges, including inadequate governance, weak infrastructure, and reluctance to adopt technology for social transformation. However, AI initiatives can address socio-economic issues such as youth unemployment, social justice, gender inequality, and political awareness.
The study recommended comprehensive AI policies and investment in digital and public infrastructure, directed mostly at youth, so that they can engage with AI to transform their socio-economic, cultural, and political position in Pakistan.

Published: 2025-09-10 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/996
EXPLORING INNOVATIVE RENTAL MODELS IN SHARED HOUSING: CO-LIVING AND BEYOND
2025-09-11T12:17:49+05:00
Authors: Aman Ullah, Mir Wali Shah, Syeda Arfa Quddusi, Demet Irkli Eryildiz (contact: mahboobmails@gmail.com)

Abstract: The growing need for affordable, flexible, and socially enriching housing in urban areas has given rise to an innovative rental model: co-living. This study explores co-living as a potential solution to urban housing shortages, rising rental costs, and social isolation in highly populated cities. Co-living models are characterised by shared facilities, professional management, and community-focused living environments that aim to balance privacy and social interaction. Drawing on case studies from Norway, the UK, Hong Kong, Malaysia, and the United States, the research investigates the socio-economic, technological, and cultural factors driving the adoption of co-living spaces in urban areas. It also examines the challenges faced by residents and stakeholders, including issues of privacy, legal regulation, and long-term sustainability. The findings offer policy recommendations and design insights for stakeholders such as urban planners, developers, and governments seeking to support innovative, inclusive, and environmentally sustainable housing alternatives.

Published: 2025-09-11 | Copyright (c) 2025

https://sesjournal.org/index.php/1/article/view/997
AUTOMATING AGILE DECISIONS: A COMPARATIVE ANALYSIS OF AI-POWERED REQUIREMENT PRIORITIZATION TOOLS
2025-09-11T12:24:24+05:00
Authors: Shawaiz Arif, Neha Ijaz, Muhammad Faisal, Muhammad Ahmed, Amna Khan (contact: mahboobmails@gmail.com)

Abstract: Requirement prioritization is a cornerstone of Agile software engineering, yet traditional methods such as MoSCoW and the Analytical Hierarchy Process (AHP) often struggle with scalability, subjectivity, and stakeholder bias. Recent advances in Artificial Intelligence (AI), including natural language processing (NLP), machine learning (ML), and sentiment analysis, offer opportunities to automate and enhance this process.

This study presents a comparative analysis of prominent AI-powered requirement prioritization tools, evaluating them across four dimensions: AI capabilities, cost structures, integration with Agile ecosystems, and usability. Data was drawn from product documentation, user reviews, academic literature, and expert insights. Tools such as Airfocus, Craft.io, ClickUp AI, ReqSuite RM, Kanoah Tests, Zepel, and Aha! Roadmaps were systematically assessed.

Findings reveal that no single tool is universally optimal; rather, suitability depends on organizational scale, resource availability, and integration needs. While some platforms excel in predictive analytics and compliance, others emphasize affordability or ease of onboarding.
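To make the automation concrete, here is a toy sketch of the kind of weighted, NLP-assisted requirement scoring such tools perform. The weights, keyword lexicon, and backlog items are invented for illustration and correspond to no specific product surveyed here.

```python
# Toy sketch of automated requirement scoring: a weighted blend of business
# value, effort, and keyword-level sentiment mined from stakeholder comments.
# Weights, lexicon, and requirements are invented; no product is modeled.
POSITIVE = {"must", "critical", "blocker", "urgent"}
NEGATIVE = {"nice-to-have", "optional", "someday"}

def sentiment(comment: str) -> float:
    words = set(comment.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def priority(value: int, effort: int, comments: list[str],
             w_value=0.5, w_effort=0.3, w_sent=0.2) -> float:
    s = sum(sentiment(c) for c in comments)
    return w_value * value - w_effort * effort + w_sent * s

backlog = {
    "SSO login":     priority(8, 5, ["critical blocker for enterprise"]),
    "Dark mode":     priority(3, 2, ["nice-to-have", "optional polish"]),
    "Audit logging": priority(7, 6, ["must have for compliance"]),
}
for req, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{score:6.2f}  {req}")
```

Real tools replace the keyword lexicon with trained NLP models, which is precisely where the transparency gap noted next arises.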
However, gaps remain in transparency and explainability, limiting stakeholder trust in AI-driven outputs.

This paper contributes by offering both a practical guide for Agile teams and a research-oriented discussion of adoption barriers, ethical considerations, and future directions. As AI-driven prioritization matures, the integration of explainable AI (XAI), hybrid human-AI decision-making, and emerging techniques such as large language models (LLMs) will be critical for advancing requirement engineering practices.

Index Terms: Requirement prioritization, Artificial intelligence (AI), Agile software engineering, Requirements engineering (RE), Natural language processing (NLP), Explainable AI (XAI), Tool comparison, Machine learning (ML), Human-AI collaboration, Cost-benefit analysis

Published: 2025-09-11 | Copyright (c) 2025