Computation doi: 10.3390/computation13080187
Authors: Olena Kiseleva, Sergiy Yakovlev, Olga Prytomanova, Oleksandr Kuzenkov
This study presents an extended approach to compartmental modeling of infectious disease spread, focusing on regional heterogeneity within affected areas. Using classical SIS, SIR, and SEIR frameworks, we simulate the dynamics of COVID-19 across two major regions of Ukraine—Dnipropetrovsk and Kharkiv—during the period 2020–2024. The proposed mathematical model incorporates regionally distributed subpopulations and applies a system of differential equations solved using the classical fourth-order Runge–Kutta method. The simulations are validated against real-world epidemiological data from national and international sources. The SEIR model demonstrated superior performance, achieving maximum relative errors of 4.81% and 5.60% in the Kharkiv and Dnipropetrovsk regions, respectively, outperforming the SIS and SIR models. Despite limited mobility and social contact data, the regionally adapted models achieved acceptable accuracy for medium-term forecasting. This validates the practical applicability of extended compartmental models in public health planning, particularly in settings with constrained data availability. The results further support the use of these models for estimating critical epidemiological indicators such as infection peaks and hospital resource demands. The proposed framework offers a scalable and computationally efficient tool for regional epidemic forecasting, with potential applications to future outbreaks in geographically heterogeneous environments.
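A minimal sketch of the kind of computation described above: a single-region SEIR system integrated with the classical fourth-order Runge–Kutta method. The population size and rate parameters below are illustrative placeholders, not the values fitted for the Ukrainian regions in the study.

```python
import numpy as np

def seir_rhs(state, beta, sigma, gamma, n_pop):
    """Right-hand side of the SEIR system: dS/dt, dE/dt, dI/dt, dR/dt."""
    s, e, i, r = state
    new_inf = beta * s * i / n_pop
    return np.array([-new_inf,
                     new_inf - sigma * e,
                     sigma * e - gamma * i,
                     gamma * i])

def rk4_step(state, dt, *args):
    """Classical fourth-order Runge-Kutta step."""
    k1 = seir_rhs(state, *args)
    k2 = seir_rhs(state + 0.5 * dt * k1, *args)
    k3 = seir_rhs(state + 0.5 * dt * k2, *args)
    k4 = seir_rhs(state + dt * k3, *args)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative parameters (not the fitted regional values from the study)
n_pop, beta, sigma, gamma, dt = 3_000_000, 0.35, 1 / 5.2, 1 / 10, 1.0
state = np.array([n_pop - 100.0, 50.0, 50.0, 0.0])  # S, E, I, R
trajectory = [state]
for day in range(365):
    state = rk4_step(state, dt, beta, sigma, gamma, n_pop)
    trajectory.append(state)
peak_day = int(np.argmax([s[2] for s in trajectory]))
print(f"peak infections on day {peak_day}: {trajectory[peak_day][2]:.0f}")
```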
Computation doi: 10.3390/computation13080186
Authors: Yeyubei Zhang, Zhongyan Wang, Zhanyi Ding, Yexin Tian, Jianglai Dai, Xiaorui Shen, Yunchong Liu, Yuchen Cao
Social media platforms have emerged as valuable sources for mental health research, enabling the detection of conditions such as depression through analyses of user-generated posts. This manuscript offers practical, step-by-step guidance for applying machine learning and deep learning methods to mental health detection on social media. Key topics include strategies for handling heterogeneous and imbalanced datasets, advanced text preprocessing, robust model evaluation, and the use of appropriate metrics beyond accuracy. Real-world examples illustrate each stage of the process, and an emphasis is placed on transparency, reproducibility, and ethical best practices. While the present work focuses on text-based analysis, we discuss the limitations of this approach—including label inconsistency and a lack of clinical validation—and highlight the need for future research to integrate multimodal signals and gold-standard psychometric assessments. By sharing these frameworks and lessons, this manuscript aims to support the development of more reliable, generalizable, and ethically responsible models for mental health detection and early intervention.
Computation doi: 10.3390/computation13080185
Authors: Aksultan Mukhanbet, Beimbet Daribayev
Quantum machine learning (QML) has emerged as a promising approach for enhancing image classification by exploiting quantum computational principles such as superposition and entanglement. However, practical applications on complex datasets like CIFAR-100 remain limited due to the low expressivity of shallow circuits and challenges in circuit optimization. In this study, we propose HQCNN–REGA—a novel hybrid quantum–classical convolutional neural network architecture that integrates data re-uploading and genetic algorithm optimization for improved performance. The data re-uploading mechanism allows classical inputs to be encoded multiple times into quantum states, enhancing the model’s capacity to learn complex visual features. In parallel, a genetic algorithm is employed to evolve the quantum circuit architecture by optimizing gate sequences, entanglement patterns, and layer configurations. This combination enables automatic discovery of efficient parameterized quantum circuits without manual tuning. Experiments on the MNIST and CIFAR-100 datasets demonstrate state-of-the-art performance for quantum models, with HQCNN–REGA outperforming existing quantum neural networks and approaching the accuracy of advanced classical architectures. In particular, we compare our model with classical convolutional baselines such as ResNet-18 to validate its effectiveness in real-world image classification tasks. Our results demonstrate the feasibility of scalable, high-performing quantum–classical systems and offer a viable path toward practical deployment of QML in computer vision applications, especially on noisy intermediate-scale quantum (NISQ) hardware.
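The data re-uploading idea can be illustrated with a small variational circuit, sketched here in PennyLane under the assumption that only the encoding pattern matters; this is not the paper's HQCNN–REGA architecture, and the layer count and weight shapes are arbitrary.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def reuploading_circuit(x, weights):
    # Data re-uploading: the same classical features are encoded again
    # before every variational layer, increasing circuit expressivity.
    for layer_weights in weights:
        qml.RY(x[0], wires=0)
        qml.RY(x[1], wires=1)
        qml.Rot(*layer_weights[0], wires=0)
        qml.Rot(*layer_weights[1], wires=1)
        qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, np.pi, size=(3, 2, 3))  # 3 layers, 2 wires, 3 angles
x = np.array([0.4, 1.1])                                # toy input features
print(reuploading_circuit(x, weights))                  # expectation value in [-1, 1]
```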
Computation doi: 10.3390/computation13080184
Authors: Konstantin A. Tereshchenko, Rustem T. Ismagilov, Nikolai V. Ulitin, Yana L. Lyulinskaya, Alexander S. Novikov
Divinylisoprene rubber, a copolymer of butadiene and isoprene, is used as raw material for rubber technical products, combining isoprene rubber’s elasticity and butadiene rubber’s wear resistance. These properties depend quantitatively on the copolymer composition, which depends on the kinetics of its synthesis. This work aims to theoretically describe how the monomer mixture composition in the butadiene–isoprene copolymerization affects the activity of the TiCl4-Al(i-C4H9)3 catalytic system (expressed by active sites concentration) via kinetic modeling. This enables development of a reliable kinetic model for divinylisoprene rubber synthesis, predicting reaction rate, molecular weight, and composition, applicable to reactor design and process intensification. Active sites concentrations were calculated from experimental copolymerization rates and known chain propagation constants for various monomer compositions. Kinetic equations for active sites formation were based on mass-action law and Langmuir monomolecular adsorption theory. An analytical equation relating active sites concentration to monomer composition was derived, analyzed, and optimized with experimental data. The results show that monomer composition’s influence on active sites concentration is well described by a two-step kinetic model (physical adsorption followed by Ti–C bond formation), accounting for competitive adsorption: isoprene adsorbs more readily, while butadiene forms more stable active sites.
Computation doi: 10.3390/computation13080183
Authors: Bingxian Wang, Sunxiang Zhu
In a road network, main roads are usually equipped with traffic flow monitoring devices that record main-road traffic data in real time. Three complex scenarios, i.e., Y-junctions, multi-lane merging, and signalized intersections, are considered in this paper by developing a novel modeling system that leverages only historical main-road data to reconstruct branch-road volumes and identify pivotal time points at which instantaneous observations enable robust inference of period-aggregate traffic volumes. Four mathematical models (I–IV) are built using the data given in the appendix, with performance quantified via error metrics (RMSE, MAE, MAPE) and stability indices (perturbation sensitivity index, structure similarity score). Finally, significant traffic flow change points are identified with the PELT algorithm.
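As a minimal illustration of the change-point step, the sketch below runs the PELT algorithm (via the ruptures library) on a synthetic series standing in for main-road flow data; the series, penalty value, and regime structure are invented for the example.

```python
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Synthetic stand-in for a main-road flow series: three regimes with different means.
flow = np.concatenate([rng.normal(300, 20, 96),
                       rng.normal(520, 25, 96),
                       rng.normal(410, 20, 96)])

algo = rpt.Pelt(model="rbf").fit(flow.reshape(-1, 1))
change_points = algo.predict(pen=10)  # indices that end each detected segment
print("significant change points:", change_points)
```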
Computation doi: 10.3390/computation13080182
Authors: Zhengwei Hou, Liang Li
In this paper, we propose the MUSWENO scheme, a novel mapped weighted essentially non-oscillatory (WENO) method that employs unequal-sized stencils, for solving nonlinear degenerate parabolic equations. A new mapping function and new nonlinear weights are proposed to reduce the difference between the linear and nonlinear weights, yielding smaller numerical errors and fifth-order accuracy. Compared with traditional WENO schemes, the new scheme offers the advantage that the linear weights can be any positive numbers whose sum is one, eliminating the need to handle cases with negative linear weights. Another advantage is that a polynomial can be reconstructed over the large stencil, whereas many classical high-order WENO reconstructions only reconstruct the values at the boundary points or discrete quadrature points. Extensive numerical examples verify the good performance of this scheme.
Computation doi: 10.3390/computation13080181
Authors: Gábor Lakatos, Bence Zoltán Vámos, István Aupek, Mátyás Andó
Shift organizations in automotive manufacturing often rely on manual task allocation, resulting in inefficiencies, human error, and increased workload for supervisors. This research introduces an automated solution using the Kuhn-Munkres algorithm, integrated with the Moodle learning management system, to optimize task assignments based on operator qualifications and task complexity. Simulations conducted with real industrial data demonstrate that the proposed method meets operational requirements, both logically and mathematically. The system improves the start of shifts by assigning simpler tasks initially, enhancing operator confidence and reducing the need for assistance. It also ensures that task assignments align with required training levels, improving quality and process reliability. For industrial practitioners, the approach provides a practical tool to reduce planning time, human error, and supervisory burden, while increasing shift productivity. From an academic perspective, the study contributes to applied operations research and workforce optimization, offering a replicable model grounded in real-world applications. The integration of algorithmic task allocation with training systems enables a more accurate matching of workforce capabilities to production demands. This study aims to support data-driven decision-making in shift management, with the potential to enhance operational efficiency and encourage timely start of work, thereby possibly contributing to smoother production flow and improved organizational performance.
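A compact way to illustrate the assignment step is SciPy's linear_sum_assignment, which implements the Hungarian (Kuhn–Munkres) method; the cost matrix below is a made-up operator/task mismatch score, not data from the Moodle integration described in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows = operators, columns = tasks,
# entries = mismatch between operator qualification and task complexity.
cost = np.array([[4, 1, 3, 2],
                 [2, 0, 5, 3],
                 [3, 2, 2, 4],
                 [4, 2, 3, 0]])

rows, cols = linear_sum_assignment(cost)   # minimum-cost one-to-one assignment
for op, task in zip(rows, cols):
    print(f"operator {op} -> task {task} (cost {cost[op, task]})")
print("total assignment cost:", cost[rows, cols].sum())
```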
Computation doi: 10.3390/computation13080180
Authors: Riaz Muhammad, Anam Gulzar, Naveen Kosar, Tariq Mahmood
Recent research on the design and synthesis of new and upgraded materials for secondary batteries is growing to fulfill future energy demands around the globe. Herein, using DFT calculations, the thermodynamic and electrochemical properties of Na/Na+@C12 complexes with halide counter anions (X− = Br−, Cl−, and F−) are studied for the enhancement of Na-ion battery cell voltage and overall performance. The isolated C12 nanoring showed a low cell voltage (−1.32 V), which was significantly increased after adsorption of the halide counter anions. Adsorption of halides increased the Gibbs free energy, which in turn resulted in higher cell voltage. Cell voltage increased with the increasing electronegativity of the halide anion. The Gibbs free energy of Br−@C12 was −52.36 kcal·mol−1, corresponding to a desirable cell voltage of 2.27 V, making it suitable for use as an anode in sodium-ion batteries. The estimated cell voltages of the considered complexes support their effective use in sodium-ion secondary batteries.
Computation doi: 10.3390/computation13080179
Authors: Darío Fernando Guamán-Lozada, Lenin Santiago Orozco Cantos, Guido Patricio Santillán Lima, Fabian Arias Arias
The accurate dosing of polyaluminum chloride (PAC) is essential for achieving effective coagulation in drinking water treatment, yet conventional methods such as jar tests are limited in their responsiveness and operational efficiency. This study proposes a hybrid modeling framework that integrates artificial neural networks (ANN) with genetic algorithms (GA) to optimize PAC dosage under variable raw water conditions. Operational data from 400 jar test experiments, collected between 2022 and 2024 at the Yanahurco water treatment plant (Ecuador), were used to train an ANN model capable of predicting six post-treatment water quality indicators, including turbidity, color, and pH. The ANN achieved excellent predictive accuracy (R2 > 0.95 for turbidity and color), supporting its use as a surrogate model within a GA-based optimization scheme. The genetic algorithm evaluated dosage strategies by minimizing treatment costs while enforcing compliance with national water quality standards. The results revealed a bimodal dosing pattern, favoring low PAC dosages (~4 ppm) during routine conditions and higher dosages (~12 ppm) when influent quality declined. Optimization yielded a 49% reduction in median chemical costs and improved color compliance from 52% to 63%, while maintaining pH compliance above 97%. Turbidity remained a challenge under some conditions, indicating the potential benefit of complementary coagulants. The proposed ANN–GA approach offers a scalable and adaptive solution for enhancing chemical dosing efficiency in water treatment operations.
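A toy sketch of the optimization loop: a simple genetic algorithm searches for a PAC dose that minimizes chemical cost while keeping a predicted quality indicator within a limit. The surrogate_turbidity function is a stand-in for the trained ANN, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_turbidity(dose):
    """Stand-in for the trained ANN: predicted post-treatment turbidity (NTU)."""
    return 8.0 * np.exp(-0.35 * dose) + 0.6

def fitness(dose, cost_per_ppm=0.12, limit_ntu=1.0, penalty=50.0):
    # Minimize chemical cost while penalizing violations of the quality limit.
    cost = cost_per_ppm * dose
    violation = max(0.0, surrogate_turbidity(dose) - limit_ntu)
    return cost + penalty * violation

pop = rng.uniform(0.0, 20.0, size=40)                    # candidate PAC dosages (ppm)
for generation in range(60):
    scores = np.array([fitness(d) for d in pop])
    parents = pop[np.argsort(scores)[:20]]               # truncation selection
    children = parents + rng.normal(0.0, 0.5, size=20)   # Gaussian mutation
    pop = np.clip(np.concatenate([parents, children]), 0.0, 20.0)

best = pop[np.argmin([fitness(d) for d in pop])]
print(f"recommended dose ~ {best:.2f} ppm, predicted turbidity "
      f"{surrogate_turbidity(best):.2f} NTU")
```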
Computation doi: 10.3390/computation13080178
Authors: Itzel Luviano Soto, Yajaira Concha-Sánchez, Alfredo Raya
Given the importance of turbidity as a key indicator of water quality, this study investigates the use of a convolutional neural network (CNN) to classify water samples into five turbidity-based categories. These classes were defined using ranges inspired by Mexican environmental regulations and generated from 33 laboratory-prepared mixtures with varying concentrations of suspended clay particles. Red, green, and blue (RGB) images of each sample were captured under controlled optical conditions, and turbidity was measured using a calibrated turbidimeter. A transfer learning (TL) approach was applied using EfficientNet-B0, a deep yet computationally efficient CNN architecture. The model achieved an average accuracy of 99% across ten independent training runs, with minimal misclassifications. The use of a lightweight deep learning model, combined with a standardized image acquisition protocol, represents a novel and scalable alternative for rapid, low-cost water quality assessment in future environmental monitoring systems.
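A minimal transfer-learning sketch in PyTorch/torchvision, assuming an ImageNet-pretrained EfficientNet-B0 whose classifier head is replaced with a 5-way layer for the turbidity classes; the frozen-backbone setup and dummy batch are illustrative choices, not the exact training recipe of the study.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

# Load ImageNet-pretrained EfficientNet-B0 and replace the classifier head
# with a 5-way layer for the turbidity categories.
model = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 5)

# Freeze the feature extractor; train only the new head (one common TL setup).
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

dummy_batch = torch.randn(8, 3, 224, 224)      # stand-in for RGB sample images
labels = torch.randint(0, 5, (8,))              # stand-in turbidity classes
optimizer.zero_grad()
loss = criterion(model(dummy_batch), labels)
loss.backward()
optimizer.step()
print("one training step done, loss =", float(loss))
```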
Computation doi: 10.3390/computation13080177
Authors: Jorge Jorrin-Coz, Mariko Nakano, Hector Perez-Meana, Leobardo Hernandez-Gonzalez
Speaker profiling systems are often evaluated on a single corpus, which complicates reliable comparison. We present a fully reproducible evaluation pipeline that trains Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) models independently on three speech corpora representing distinct recording conditions—studio-quality TIMIT, crowdsourced Mozilla Common Voice, and in-the-wild VoxCeleb1. All models share the same architecture, optimizer, and data preprocessing; no corpus-specific hyperparameter tuning is applied. We perform a detailed preprocessing and feature extraction procedure, evaluating multiple configurations and validating their applicability and effectiveness in improving the obtained results. A feature analysis shows that Mel spectrograms benefit CNNs, whereas Mel Frequency Cepstral Coefficients (MFCCs) suit LSTMs, and that the optimal Mel-bin count grows with the corpus signal-to-noise ratio (SNR). With this fixed recipe, EfficientNet achieves 99.82% gender accuracy on Common Voice (+1.25 pp over the previous best) and 98.86% on VoxCeleb1 (+0.57 pp). MobileNet attains 99.86% age-group accuracy on Common Voice (+2.86 pp) and a 5.35-year MAE for age estimation on TIMIT using a lightweight configuration. The consistent, near-state-of-the-art results across three acoustically diverse datasets substantiate the robustness and versatility of the proposed pipeline. Code and pre-trained weights are released to facilitate downstream research.
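The two feature types compared in the analysis can be extracted with librosa roughly as follows; the synthetic tone, frame sizes, and Mel-bin count are placeholder choices rather than the pipeline's tuned configuration.

```python
import numpy as np
import librosa

# Synthetic 1-second signal as a stand-in for a speech recording.
sr = 16000
y = np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)

# Log-Mel spectrogram (the input favoured by the CNNs in the pipeline).
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512, hop_length=160,
                                     n_mels=64)
log_mel = librosa.power_to_db(mel)

# MFCCs (the input favoured by the LSTMs).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=512, hop_length=160)

print("log-Mel shape:", log_mel.shape, "MFCC shape:", mfcc.shape)
```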
Computation doi: 10.3390/computation13080176
Authors: Gonçalo A. S. Dias, Bruno M. M. Pereira
In this study, we investigate the trapping of linear water waves by infinite arrays of three-dimensional fixed periodic structures in a three-layer fluid. Each layer has an independent uniform velocity field with respect to the fixed ground, in addition to the internal modes along the interfaces between layers. Dynamical stability between velocity shear and gravitational pull constrains the layer velocities to a neighbourhood of the diagonal U1=U2=U3 in velocity space. A non-linear spectral problem results from the variational formulation. This problem can be linearized, resulting in a geometric condition (from energy minimization) that ensures the existence of trapped modes within the limits set by stability. These modes are solutions living in the discrete spectrum that do not radiate energy to infinity. Symmetries reduce the global problem to solutions in the first octant of the three-dimensional velocity space. Examples are shown of configurations of obstacles which satisfy the stability and geometric conditions, depending on the values of the layer velocities. The robustness of the result of the vertical column from previous studies is confirmed in the new configurations. This allows comparison principles (Cavalieri’s principle, etc.) to be used in determining whether trapped modes are generated.
Computation doi: 10.3390/computation13070175
Authors: Massimiliano Zaniboni
Electrical restitution (ER) is a determinant of cardiac repolarization stability and can be measured as steady action potential (AP) duration (APD) at different pacing rates—the so-called dynamic restitution (ERdyn) curve—or as APD changes after pre- or post-mature stimulations—the so-called standard restitution (ERs1s2) curve. Short-term AP memory (Ms) has been described as the slope difference between the ERdyn and ERs1s2 curves, and represents the information stored in repolarization dynamics due to previous pacing conditions. Although previous studies have shown its dependence on ion currents and calcium cycling, a systematic picture of these features is lacking. By means of simulations with a human ventricular AP model, I show that APD restitution can be described under randomly changing pacing conditions (ERrand) and Ms derived as the slope difference between ERdyn and ERrand. Thus measured, Ms values correlate with those measured using ERs1s2. I investigate the effect on Ms of modulating the conductance of ion channels involved in AP repolarization, and of abolishing intracellular calcium transient. I show that Ms is chiefly determined by ERdyn rather than ERrand, and that interventions that shorten/prolong APD tend to decrease/increase Ms.
Computation doi: 10.3390/computation13070174
Authors: Md. Nazmuzzaman Khan, Adibuzzaman Rahi, Mohammad Al Hasan, Sohel Anwar
The United States leads the world in corn production and consumption, with an estimated value of USD 50 billion per year. There is a pressing need for novel and efficient techniques that enhance the identification and eradication of weeds in a manner that is both environmentally sustainable and economically advantageous. Weed classification for autonomous agricultural robots is a challenging task for a single-camera-based system due to noise, vibration, and occlusion. To address this issue, this paper presents a multi-camera-based system with decision-level sensor fusion that overcomes the limitations of a single-camera-based system. This study utilizes a convolutional neural network (CNN) pre-trained on the ImageNet dataset and subsequently re-trained on a limited weed dataset to classify three distinct weed species frequently encountered in corn fields: Xanthium strumarium (Common Cocklebur), Amaranthus retroflexus (Redroot Pigweed), and Ambrosia trifida (Giant Ragweed). The test results showed that the re-trained VGG16 with a transfer-learning-based classifier exhibited acceptable accuracy (99% training, 97% validation, 94% testing accuracy) and an inference time suitable for real-time weed classification from the video feed. However, the accuracy of CNN-based classification from a single-camera video feed was found to deteriorate due to noise, vibration, and partial occlusion of weeds, and test results show it is not always sufficient for the spray system of an agricultural robot (AgBot). To improve the accuracy of the weed classification system and to overcome the shortcomings of single-sensor-based CNN classification, an improved Dempster–Shafer (DS)-based decision-level multi-sensor fusion algorithm was developed and implemented. The proposed algorithm improves CNN-based weed classification when the weed is partially occluded. It can also detect whether a sensor within an array of sensors is faulty and improves the overall classification accuracy by penalizing the evidence from a faulty sensor. Overall, the proposed fusion algorithm showed robust results in challenging scenarios, overcoming the limitations of a single-sensor-based system.
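A minimal sketch of Dempster's rule of combination for two cameras voting over the three weed classes; the mass assignments are invented, and the fault-detection and evidence-penalization extensions described in the paper are not included.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions defined on frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    # Normalize by the non-conflicting mass (classical Dempster rule).
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

CB, RP, GR = "cocklebur", "pigweed", "ragweed"
theta = frozenset({CB, RP, GR})          # frame of discernment
# Evidence from two cameras; the second view is partially occluded,
# so more mass is assigned to the full frame (ignorance).
cam1 = {frozenset({CB}): 0.7, frozenset({RP}): 0.2, theta: 0.1}
cam2 = {frozenset({CB}): 0.5, frozenset({RP, GR}): 0.2, theta: 0.3}

fused = dempster_combine(cam1, cam2)
for hyp, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(hyp), round(mass, 3))
```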
Computation doi: 10.3390/computation13070173
Authors: Gines Molina-Abril, Laura Calvet, Angel A. Juan, Daniel Riera
Small- and medium-sized enterprises (SMEs) face dynamic and competitive environments where resilience and data-driven decision-making are critical. Despite the potential benefits of artificial intelligence (AI), machine learning (ML), and optimization techniques, SMEs often struggle to adopt these tools due to high costs, limited training, and restricted hardware access. This study reviews how SMEs can employ heuristics, metaheuristics, ML, and hybrid approaches to support strategic decisions under uncertainty and resource constraints. Using bibliometric mapping with UMAP and BERTopic, 82 key works are identified and clustered into 11 thematic areas. From this, the study develops a practical framework for implementing and evaluating optimization strategies tailored to SMEs’ limitations. The results highlight critical application areas, adoption barriers, and success factors, showing that heuristics and hybrid methods are especially effective for multi-objective optimization with lower computational demands. The study also outlines research gaps and proposes future directions to foster digital transformation in SMEs. Unlike prior reviews focused on specific industries or methods, this work offers a cross-sectoral perspective, emphasizing how these technologies can strengthen SME resilience and strategic planning.
Computation doi: 10.3390/computation13070172
Authors: Sotirios J. Trigkas, Kanellos Toudas, Ioannis Chasiotis
Modern financial practices introduce complex risks, which in turn force financial institutions to rely increasingly on computational risk analytics (CRA). Our research systematically explores the evolution and intellectual structure of CRA in banking through a detailed bibliometric analysis of the literature sourced from Web of Science from 2000 to 2025. A comprehensive search in the Web of Science (WoS) Core Collection yielded 1083 peer-reviewed publications, which we analyzed using tools such as VOSviewer 1.6.20 and Bibliometrix (Biblioshiny 5.0) to uncover bibliometric characteristics such as citation patterns, keyword occurrences, and thematic clustering. Our analysis reveals key research clusters focusing on bankruptcy prediction, AI integration in financial services, and advanced deep learning applications. Furthermore, our findings indicate a transition of CRA from an emerging to an expanding domain, especially after 2019, with machine learning (ML), artificial intelligence (AI), and deep learning (DL) identified as prominent keywords and a recent shift towards blockchain, explainability, and financial stability. This study addresses the need for an updated mapping of CRA, providing valuable insights for future academic inquiry and practical financial risk management applications.
Computation doi: 10.3390/computation13070171
Authors: Xiangjun Cui, Yongqiang Xing, Guoqing Liu, Hongyu Zhao, Zhenhua Yang
Background: Epigenomic instability accelerates mutations in tumor suppressor genes and oncogenes, contributing to malignant transformation. Histone modifications, particularly methylation and acetylation, significantly influence tumor biology, with chromo-, bromo-, and Tudor domain-containing proteins mediating these changes. This study investigates how genes encoding these domain-containing proteins affect colorectal cancer (CRC) prognosis. Methods: Using CRC data from the GSE39582 and TCGA datasets, we identified domain-related genes via GeneCards and developed a prognostic signature using LASSO-COX regression. Patients were classified into high- and low-risk groups, and comparisons were made across survival, clinical features, immune cell infiltration, immunotherapy responses, and drug sensitivity predictions. Single-cell analysis assessed gene expression in different cell subsets. Results: Four domain-related genes (AKAP1, ORC1, CHAF1A, and UHRF2) were identified as a prognostic signature. Validation confirmed their prognostic value, with significant differences in survival, clinical features, immune patterns, and immunotherapy responses between the high- and low-risk groups. Drug sensitivity analysis revealed top candidates for CRC treatment. Single-cell analysis showed varied expression of these genes across cell subsets. Conclusions: This study presents a novel prognostic signature based on domain-related genes that can predict CRC severity and offer insights into immune dynamics, providing a promising tool for personalized risk assessment in CRC.
Computation doi: 10.3390/computation13070170
Authors: W. A. Chapa Pamodani Wanniarachchi, Ponniah Vajeeston, Talal Rahman, Dhayalan Velauthapillai
This study employs density functional theory (DFT) to investigate the electronic and optical properties of molybdenum (Mo) and chalcogen (S, Se, Te) co-doped anatase TiO2. Two co-doping configurations were examined: Model 1, where the dopants are adjacent, and Model 2, where the dopants are farther apart. The incorporation of Mo into anatase TiO2 resulted in a significant bandgap reduction, lowering it from 3.22 eV (pure TiO2) to a range of 2.52–0.68 eV, depending on the specific doping model. The introduction of Mo-4d states below the conduction band led to a shift in the Fermi level from the top of the valence band to the bottom of the conduction band, confirming the n-type doping characteristics of Mo in TiO2. Chalcogen doping introduced isolated electronic states from Te-5p, S-3p, and Se-4p located above the valence band maximum, further reducing the bandgap. Among the examined configurations, Mo–S co-doping in Model 1 exhibited the most stable structure with the fewest impurity states, enhancing photocatalytic efficiency by reducing charge recombination. With the exception of Mo–Te co-doping, all co-doped systems demonstrated strong oxidation power under visible light, making Mo–S and Mo–Se co-doped TiO2 promising candidates for oxidation-driven photocatalysis. However, their limited reduction ability suggests they may be less suitable for water-splitting applications. The study also revealed that dopant positioning significantly influences charge transfer and optoelectronic properties. Model 1 favored localized electron density and weaker magnetization, while Model 2 exhibited delocalized charge density and stronger magnetization. These findings underscore the critical role of dopant arrangement in optimizing TiO2-based photocatalysts for solar energy applications.
Computation doi: 10.3390/computation13070169
Authors: Ana Zekić
Machine learning (ML) is transforming computational chemistry by accelerating molecular simulations, property prediction, and inverse design. Central to this transformation is mathematical optimization, which underpins nearly every stage of model development, from training neural networks and tuning hyperparameters to navigating chemical space for molecular discovery. This review presents a structured overview of optimization techniques used in ML for computational chemistry, including gradient-based methods (e.g., SGD and Adam), probabilistic approaches (e.g., Monte Carlo sampling and Bayesian optimization), and spectral methods. We classify optimization targets into model parameter optimization, hyperparameter selection, and molecular optimization and analyze their application across supervised, unsupervised, and reinforcement learning frameworks. Additionally, we examine key challenges such as data scarcity, limited generalization, and computational cost, outlining how mathematical strategies like active learning, meta-learning, and hybrid physics-informed models can address these issues. By bridging optimization methodology with domain-specific challenges, this review highlights how tailored optimization strategies enhance the accuracy, efficiency, and scalability of ML models in computational chemistry.
Computation doi: 10.3390/computation13070168
Authors: Daniel Miler, Matija Hoić, Rudolf Tomić, Andrej Jokić, Robert Mašović
In this study, a multi-objective optimization procedure with embedded topology optimization is presented. The procedure simultaneously optimizes the spatial arrangement and topology of bodies in a multi-body system. The multi-objective algorithm determines the locations of supports, joints, active loads, reactions, and load magnitudes, which serve as inputs for the topology optimization of each body. The multi-objective algorithm dynamically adjusts domain size, support locations, and load magnitudes during optimization. Due to repeated topology optimization calls within the genetic algorithm, the computational cost is significant. To address this, two reduction strategies are proposed: (I) using a coarser mesh and (II) reducing the number of iterations during the initial generations. As optimization progresses, Strategy I gradually refines the mesh, while Strategy II increases the maximum allowable iteration count. The effectiveness of both strategies is evaluated against a baseline (Reference) without reductions. By the 25th generation, all approaches achieve similar hypervolume values (Reference: 2.181; I: 2.112; II: 2.133). The computation time is substantially reduced (Reference: 42,226 s; I: 16,814 s; II: 21,674 s), demonstrating that both strategies effectively accelerate optimization without compromising solution quality.
Computation doi: 10.3390/computation13070167
Authors: Khalid Hattaf
Most solutions of fractional differential equations (FDEs) that model real-world phenomena in various fields of science, industry, and engineering are complex and cannot be solved analytically. This paper mainly aims to present some useful results for studying the qualitative properties of solutions of FDEs involving the new generalized Hattaf mixed (GHM) fractional derivative, which encompasses many types of fractional operators with both singular and non-singular kernels. In addition, this study also aims to unify and generalize existing results under a broader operator. Furthermore, the obtained results are applied to some linear systems arising from medicine.
Computation doi: 10.3390/computation13070166
Authors: Guohui Wang, Yucheng Chen
Secret sharing schemes (SSS) are widely used in secure multi-party computation and distributed computing, and the access structure is the key to constructing such schemes. In this paper, we propose a method for constructing access structures based on hyperplane combinatorial structures over finite fields. For the given access structure, a corresponding secret sharing scheme that can identify cheaters is presented. This scheme enables the secret to be correctly recovered provided the number of cheaters does not exceed the threshold, and the cheating behavior can be detected and located.
Computation doi: 10.3390/computation13070165
Authors: Marco Roccetti, Eugenio Maria De Rosa
Using a segmented linear regression model, we examined the seasonal profiles of weekly COVID-19 deaths in Italy over a three-year period during which the SARS-CoV-2 Omicron and post-Omicron variants were predominant (September 2021–September 2024). Comparing the slopes of the regression segments, we discuss the variation in steepness of the Italian COVID-19 mortality trend, identifying the corresponding growth/decline profile for each season considered. Our findings show that, although weekly COVID-19 mortality followed a declining trend in Italy from the end of 2021 to the end of 2024, mortality rose transiently in every winter and summer of that period. These increases were more pronounced in winters than in summers, with an average progressive increase of 55.75 and 22.90 COVID-19 deaths per new week in winters and summers, respectively. COVID-19 deaths were, instead, less frequent in the intermediate periods between winters and summers, with an average decrease of 38.01 deaths per new week. The segmented regression model fitted the observed COVID-19 deaths well, as confirmed by the average determination coefficients of 0.74, 0.63 and 0.70 for winters, summers and intermediate periods, respectively. In conclusion, against a generally declining COVID-19 mortality trend in Italy over the period of interest, transient rises in mortality occurred in both winters and summers but received little attention because they were always compensated by consistent downward drifts during the intermediate periods between winters and summers.
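The fitting procedure can be sketched as separate least-squares lines over pre-specified seasonal segments; the synthetic weekly series and breakpoints below are illustrative, not the Italian surveillance data.

```python
import numpy as np

rng = np.random.default_rng(3)
weeks = np.arange(104)
# Synthetic weekly death series with alternating rising and falling phases.
deaths = 600 + 40 * np.sin(weeks / 8.0) + rng.normal(0, 15, weeks.size)

breakpoints = [0, 26, 52, 78, 104]          # illustrative segment boundaries
for start, end in zip(breakpoints[:-1], breakpoints[1:]):
    x, y = weeks[start:end], deaths[start:end]
    slope, intercept = np.polyfit(x, y, deg=1)        # least-squares line per segment
    y_hat = slope * x + intercept
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"weeks {start:3d}-{end:3d}: slope {slope:+6.2f} deaths/week, R^2 {r2:.2f}")
```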
Computation doi: 10.3390/computation13070164
Authors: Matteo Maria Piredda, Pietro Asinari
Although the lattice Boltzmann method (LBM) is relatively straightforward, it demands a well-crafted framework to handle the complex partial differential equations involved in multiphase flow simulations. For the first time to our knowledge, this work proposes a novel LBM framework to solve Eulerian–Eulerian multiphase flow equations without any finite difference correction, including very-large-density ratios and a realistic relation for the drag coefficient. The proposed methodology and all reported LBM formulas can be applied in any dimension. This opens a promising avenue for simulating multiphase flows in large High Performance Computing (HPC) facilities and on novel parallel hardware. This LBM framework consists of six coupled LBM schemes—running on the same lattice—ensuring an efficient implementation in large codes with minimum effort. The preliminary numerical results agree excellently with the reference numerical solution obtained by a traditional finite difference solver.
Computation doi: 10.3390/computation13070163
Authors: Andriy A. Avramenko, Igor V. Shevchuk, Andrii I. Tyrinov, Iryna V. Dzevulska
A hydrodynamic homogeneous model has been developed for the motion of mutually impenetrable viscoelastic non-Newtonian fluids taking into account surface tension forces. Based on this model, numerical simulations of cytokinesis hydrodynamics were performed. The cytoplasm is considered a non-Newtonian viscoelastic fluid. The model allows for the calculation of the formation and rupture of the intercellular bridge. Results from an analytical analysis shed light on the influence of the viscoelastic fluid’s relaxation time on cytokinesis dynamics. A comparison of numerical simulation results and experimental data showed satisfactory agreement.
Computation doi: 10.3390/computation13070162
Authors: Yukai Wu, Guobi Ling, Yaoke Shi
This paper presents a composite disturbance-tolerant control framework for quadrotor unmanned aerial vehicles (UAVs). By constructing an enhanced dynamic model that incorporates parameter uncertainties, external disturbances, and actuator faults, and considering the inherent underactuated and highly coupled characteristics of the UAV, a novel robust adaptive sliding mode controller (RASMC) is designed. The controller adopts a hierarchical adaptive mechanism and utilizes a dual-loop composite adaptive law to achieve the online estimation of system parameters and fault information. Using the Lyapunov method, the asymptotic stability of the closed-loop system is rigorously proven. Simulation results demonstrate that, under the combined effects of external disturbances and actuator faults, the RASMC effectively suppresses position errors (<0.05 m) and attitude errors (<0.02 radians), significantly outperforming traditional ADRC and LQR control methods. Further analysis shows that the proposed adaptive law enables the precise online estimation of aerodynamic coefficients and disturbance boundaries during actual flights, with estimation errors kept within ±10%. Moreover, compared to ADRC and LQR, RASMC reduces the settling time by more than 50% and the tracking overshoot by over 70%, while using a tanh(·) approximation to eliminate chattering. Prototype experiments confirm that the method achieves centimeter-level trajectory tracking under real uncertainties, demonstrating the superior performance and robustness of the control framework in complex flight missions.
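A one-degree-of-freedom toy example of the chattering-suppression idea: a sliding mode controller for a double integrator in which tanh(s/phi) replaces the discontinuous sign(s) term. This is only a sketch of the smoothing mechanism, not the quadrotor RASMC with its adaptive laws.

```python
import numpy as np

# Toy second-order plant  x_ddot = u + d(t), tracking x_ref = 0,
# with sliding surface s = x_dot + lam * x.
lam, k, phi, dt = 2.0, 6.0, 0.05, 1e-3
x, x_dot = 1.0, 0.0
for step in range(8000):
    t = step * dt
    disturbance = 0.5 * np.sin(3 * t)          # bounded unknown disturbance
    s = x_dot + lam * x
    # tanh(s/phi) smooths the switching term and suppresses chattering;
    # k must dominate the disturbance bound for the surface to be reached.
    u = -lam * x_dot - k * np.tanh(s / phi)
    x_ddot = u + disturbance
    x_dot += x_ddot * dt
    x += x_dot * dt

print(f"tracking error after 8 s: {abs(x):.4f}")
```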
Computation doi: 10.3390/computation13070161
Authors: Tamilarasan Ananth Kumar, Rajendirane Rajmohan, Sunday Adeola Ajagbe, Oluwatobi Akinlade, Matthew Olusegun Adigun
The rapid growth of ultra-dense mobile edge computing (UDEC) in 5G IoT networks has intensified energy inefficiencies and latency bottlenecks, exacerbated by dynamic channel conditions and imperfect CSI in real-world deployments. This paper introduces POTMEC, a power optimization framework that combines a channel-aware adaptive power allocator using real-time SNR measurements, a MATLAB-trained RL model for joint offloading decisions, and a decaying step-size algorithm guaranteeing convergence. Computational offloading is a productive technique for overcoming mobile battery life issues by processing parts of a mobile application in the cloud, and this work investigates how multi-access edge computing can reduce latency and energy usage. The experiments demonstrate that the proposed model reduces transmission energy consumption by 27.5% compared to baseline methods while maintaining latency below 15 ms in ultra-dense scenarios. The simulation results confirm a 92% accuracy in near-optimal offloading decisions under dynamic channel conditions. This work advances sustainable edge computing by enabling energy-efficient IoT deployments in 5G ultra-dense networks without compromising QoS.
Computation doi: 10.3390/computation13070160
Authors: Sergiy Bushuyev, Natalia Bushuyeva, Ivan Nekrasov, Igor Chumachenko
The management of public health projects in a BANI (brittle, anxious, non-linear, incomprehensible) environment, exemplified by the ongoing war in Ukraine, presents unprecedented challenges due to fragile systems, heightened uncertainty, and complex socio-political dynamics. This study proposes an AI-driven framework to enhance the resilience and effectiveness of public health interventions under such conditions. By integrating a coupled SEIR–Infodemic–Panicdemic Model with war-specific factors, we simulate the interplay of infectious disease spread, misinformation dissemination, and panic dynamics over 1500 days in a Ukrainian city (Kharkiv). The model incorporates time-varying parameters to account for population displacement, healthcare disruptions, and periodic war events, reflecting the evolving conflict context. Sensitivity and risk–opportunity analyses reveal that disease transmission, misinformation, and infrastructure damage significantly exacerbate epidemic peaks, while AI-enabled interventions, such as fact-checking, mental health support, and infrastructure recovery, offer substantial mitigation potential. Qualitative assessments identify technical, organisational, ethical, regulatory, and military risks, alongside opportunities for predictive analytics, automation, and equitable healthcare access. Quantitative simulations demonstrate that risks, like increased displacement, can amplify infectious peaks by up to 28.3%, whereas opportunities, like enhanced fact-checking, can reduce misinformation by 18.2%. These findings provide a roadmap for leveraging AI to navigate BANI environments, offering actionable insights for public health practitioners in Ukraine and other crisis settings. The study underscores AI’s transformative role in fostering adaptive, data-driven strategies to achieve sustainable health outcomes amidst volatility and uncertainty.
Computation doi: 10.3390/computation13070159
Authors: Juan Felipe Restrepo-Arias, María José Montoya-Castaño, María Fernanda Moreno-De La Espriella, John W. Branch-Bedoya
The accurate classification of cocoa pod ripeness is critical for optimizing harvest timing, improving post-harvest processing, and ensuring consistent quality in chocolate production. Traditional ripeness assessment methods are often subjective, labor-intensive, or destructive, highlighting the need for automated, non-invasive solutions. This study evaluates the performance of R-CNN-based deep learning models—Faster R-CNN and Mask R-CNN—for the detection and segmentation of cocoa pods across four ripening stages (0–2 months, 2–4 months, 4–6 months, and >6 months) using the RipSetCocoaCNCH12 dataset, which is publicly accessible, comprising 4116 labeled images collected under real-world field conditions, in the context of precision agriculture. Initial experiments using pretrained weights and standard configurations on a custom COCO-format dataset yielded promising baseline results. Faster R-CNN achieved a mean average precision (mAP) of 64.15%, while Mask R-CNN reached 60.81%, with the highest per-class precision in mature pods (C4) but weaker detection in early stages (C1). To improve model robustness, the dataset was subsequently augmented and balanced, followed by targeted hyperparameter optimization for both architectures. The refined models were then benchmarked against state-of-the-art YOLOv8 networks (YOLOv8x and YOLOv8l-seg). Results showed that YOLOv8x achieved the highest mAP of 86.36%, outperforming YOLOv8l-seg (83.85%), Mask R-CNN (73.20%), and Faster R-CNN (67.75%) in overall detection accuracy. However, the R-CNN models offered valuable instance-level segmentation insights, particularly in complex backgrounds. Furthermore, a qualitative evaluation using confidence heatmaps and error analysis revealed that R-CNN architectures occasionally missed small or partially occluded pods. These findings highlight the complementary strengths of region-based and real-time detectors in precision agriculture and emphasize the need for class-specific enhancements and interpretability tools in real-world deployments.
Computation doi: 10.3390/computation13070158
Authors: Mahmoud Chaira, Abdelkader Belhenniche, Roman Chertovskih
The widespread adoption of Internet of Things (IoT) devices has been accompanied by a remarkable rise in both the frequency and intensity of Distributed Denial of Service (DDoS) attacks, which aim to overwhelm and disrupt the availability of networked systems and connected infrastructures. In this paper, we present a novel approach to DDoS attack detection and mitigation that integrates state-of-the-art machine learning techniques with Blockchain-based Mobile Edge Computing (MEC) in IoT environments. Our solution leverages the decentralized and tamper-resistant nature of Blockchain technology to enable secure and efficient data collection and processing at the network edge. We evaluate multiple machine learning models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Transformer architectures, and LightGBM, using the CICDDoS2019 dataset. Our results demonstrate that Transformer models achieve a superior detection accuracy of 99.78%, while RF follows closely with 99.62%, and LightGBM offers optimal efficiency for real-time detection. This integrated approach significantly enhances detection accuracy and mitigation effectiveness compared to existing methods, providing a robust and adaptive mechanism for identifying and mitigating malicious traffic patterns in IoT environments.
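A minimal sketch of the gradient-boosting branch of the comparison: training a LightGBM classifier on tabular flow features. The synthetic feature matrix stands in for CICDDoS2019 columns, so the feature meanings and the train/test split are purely illustrative.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
# Synthetic stand-in for CICDDoS2019 flow features (e.g., packet rate, duration).
n = 5000
X = rng.normal(size=(n, 10))
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.5, n) > 0).astype(int)  # 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```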
Computation doi: 10.3390/computation13070157
Authors: Eugen Smolkin
Graphene interfaces in layered dielectrics can support unique electromagnetic modes, but analyzing these modes requires robust computational techniques. This work presents a numerical method for computing TE-polarized eigenmodes in a planar stratified dielectric slab with an infinitesimally thin graphene sheet at its interface. The governing boundary-value problem is reformulated as coupled initial-value problems and solved via a customized shooting method, enabling accurate calculation of complex propagation constants and field profiles despite the discontinuity at the graphene layer. We demonstrate that the graphene significantly alters the modal spectrum, introducing complex leaky and surface waves with attenuation due to graphene’s conductivity. Numerical results illustrate how the layers’ inhomogeneity and the graphene’s surface conductivity influence mode confinement and loss. These findings confirm the robustness of the proposed computational approach and provide insights relevant to the design and analysis of graphene-based waveguiding devices.
Computation doi: 10.3390/computation13070156
Authors: Chafik Boulealam, Hajar Filali, Jamal Riffi, Adnane Mohamed Mahraz, Hamid Tairi
This paper presents Feedback-Based Validation Learning (FBVL), a novel approach that transforms the role of validation datasets in deep learning. Unlike conventional methods that utilize validation datasets for performance evaluation post-training, FBVL integrates these datasets into the training process. It employs real-time feedback to optimize the model’s weight adjustments, enhancing prediction accuracy and overall model performance. Importantly, FBVL preserves the integrity of the validation process by using prediction outcomes on the validation dataset to guide training adjustments, without directly accessing the dataset. Our empirical study conducted using the Iris dataset demonstrated the effectiveness of FBVL. The Iris dataset, comprising 150 samples from three species of Iris flowers, each characterized by four features, served as an ideal testbed for demonstrating FBVL’s effectiveness. The implementation of FBVL led to substantial performance improvements, surpassing the accuracy of the previous best result by approximately 7.14% and achieving a loss reduction greater than the previous methods by approximately 49.18%. When FBVL was applied to the Multimodal EmotionLines Dataset (MELD), it showcased its wide applicability across various datasets and domains. The model achieved a test-set accuracy of 70.08%, surpassing the previous best-reported accuracy by approximately 3.12%. These remarkable results underscore FBVL’s ability to optimize performance on established datasets and its capacity to minimize loss. Using our FBVL method, we achieved a test set f1_score micro of 70.07%, which is higher than the previous best-reported value for f1_score micro of 67.59%. These results demonstrate that FBVL enhances classification accuracy and model generalization, particularly in scenarios involving small or imbalanced datasets, offering practical benefits for designing more efficient and robust neural network architectures.
Computation doi: 10.3390/computation13070155
Authors: Alexander E. Filippov, Mikhail Popov, Valentin L. Popov
The present work aimed to develop a simple simulation tool to support studies of slip and other non-traditional boundary conditions in solid–fluid interactions. A mesoscale particle model (movable automata) was chosen to enable performant simulation of all relevant aspects of the system, including phase changes, plastic deformation and flow, interface phenomena, turbulence, etc. The physical system under study comprised two atomically flat surfaces composed of particles of different sizes and separated by a model fluid formed by moving particles with repulsing cores of different sizes and long-range attraction. The resulting simulation method was tested under a variety of particle densities and conditions. It was shown that the particles can enter different (solid, liquid, and gaseous) states, depending on the effective temperature (kinetic energy caused by surface motion and random noise generated by spatially distributed Langevin sources). The local order parameter and formation of solid domains was studied for systems with varying density. Heating of the region close to one of the plates could change the density of the liquid in its proximity and resulted in chaotization (turbulence); it also dramatically changed the system configuration, the direction of the average flow, and reduced the effective friction force.
Computation doi: 10.3390/computation13070154
Authors: Alexander Dudin, Olga Dudina, Sergei Dudin
An MAP/PH/N-type queuing system functioning within a finite-state Markovian random environment is studied. The state of the random environment impacts the number of available servers, the underlying processes of customer arrivals and service, and the impatience rate of customers. The novelty of the model lies in the environment's impact on the state space of the underlying arrival process and in the use of a service time distribution that is more general than the exponential one. The behavior of the system is described by a multidimensional Markov chain that belongs to the class of level-independent quasi-birth-and-death processes or of asymptotically quasi-Toeplitz Markov chains, depending on whether the customers are absolutely patient in all states of the random environment or are impatient in at least one state of the random environment. Using the tools of the corresponding processes or chains, a stationary analysis of the system is implemented. In particular, it is shown that the system is always ergodic if customers are impatient in at least one state of the random environment. Expressions for the computation of the basic performance measures of the system are presented. Examples of their computation for a system with three states of the random environment are presented as 3-D surfaces. The results can be useful for the analysis of a variety of real-world systems with parameters that may randomly change during operation. In particular, they can be used to optimally match the number of active servers and the bandwidth of the transmission channels to the current arrival rate, and vice versa.
Computation doi: 10.3390/computation13070153
Authors: Igor Pehnec, Damir Sedlar, Ivo Marinic-Kragic, Damir Vučina
In this paper, a new approach to topology optimization using the parameterized level set function and genetic algorithm optimization methods is presented. The impact of a number of parameters describing the level set function in the representation of the model was examined. Using the B-spline interpolation function, the number of variables describing the level set function was decreased, enabling the application of evolutionary methods (genetic algorithms) in the topology optimization process. The traditional level set method is performed by using the Hamilton–Jacobi transport equation, which implies the use of gradient optimization methods that are prone to becoming stuck in local minima. Furthermore, the resulting optimal shapes are strongly dependent on the initial solution. The proposed topology optimization procedure, written in MATLAB R2013b, utilizes a genetic algorithm for global optimization, enabling it to locate the global optimum efficiently. To assess the acceleration and convergence capabilities of the proposed topology optimization method, a new genetic algorithm penalty operator was tested. This operator addresses the slow convergence issue typically encountered when the genetic algorithm optimization procedure nears a solution. By penalizing similar individuals within a population, the method aims to enhance convergence speed and overall performance. In complex examples (3D), the method can also function as a generator of good initial solutions for faster topology optimization methods (e.g., level set) that rely on such initial solutions. Both the proposed method and the traditional methods have their own advantages and limitations. The main advantage is that the proposed method is a global search method. This makes it robust against entrapment in local minima and independent of the initial solution. It is important to note that this evolutionary approach does not necessarily perform better in terms of convergence speed compared to gradient-based or other local optimization methods. However, once the global optimum has been found using the genetic algorithm, convergence can be accelerated using a faster local method such as gradient-based optimization. The application and usefulness of the method were tested on typical 2D cantilever beams and Michell beams.
Computation doi: 10.3390/computation13070152
Authors: Yuecao Cao, Qiang Zhang, Shucheng Zhang, Ying Tian, Xiangwei Dong, Xiaojun Song, Dongxiang Wang
Rock-breaking cutters are critical components in tunneling, mining, and drilling operations, where efficiency, durability, and energy consumption are paramount. Traditional cutter designs and empirical process optimization methods often fail to address the dynamic interaction between heterogeneous rock masses and tool structures, leading to premature wear, high specific energy, and suboptimal performance. Topology optimization, as an advanced computational design method, offers transformative potential for lightweight, high-strength cutter structures and adaptive cutting process control. This review systematically examines recent advancements in topology-optimized cutter design and its integration with rock-cutting mechanics. The structural innovations in cutter geometry and materials are analyzed, emphasizing solutions for stress distribution, wear/fatigue resistance, and dynamic load adaptation. The numerical methods for modeling rock–tool interactions are introduced, including discrete element method (DEM) simulations, smoothed particle hydrodynamics (SPH) methods, and machine learning (ML)-enhanced predictive models. The cutting process optimization strategies that leverage topology optimization to balance objectives such as energy efficiency, chip formation control, and tool lifespan are evaluated.
Computation doi: 10.3390/computation13060151
Authors: Mai Alammar, Khalil El Hindi, Hend Al-Khalifa
Semantic text chunking refers to segmenting text into coherent semantic chunks, i.e., into sets of statements that are semantically related. Semantic chunking is an essential pre-processing step in various NLP tasks, e.g., document summarization, sentiment analysis, and question answering. In this paper, we propose a hybrid, two-step semantic text chunking method that combines the effectiveness of unsupervised semantic chunking, based on the similarities between sentence embeddings, with pre-trained language models (PLMs), especially BERT, by fine-tuning BERT on the semantic textual similarity (STS) task to provide flexible and effective semantic text chunking. We evaluated the proposed method in English and Arabic. To the best of our knowledge, no Arabic dataset exists for assessing semantic text chunking at this level; therefore, we created AraWiki50k, inspired by an existing English dataset, to evaluate our proposed method. Our experiments showed that exploiting the fine-tuned pre-trained BERT on STS improves results over unsupervised semantic chunking by an average of 7.4 in the PK metric and 11.19 in the WindowDiff metric on four English evaluation datasets, and by 0.12 in PK and 2.29 in WindowDiff on the Arabic dataset.
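The unsupervised first step can be sketched with sentence embeddings and a similarity threshold between consecutive sentences; the model name, threshold, and sample sentences below are illustrative, and the fine-tuned BERT/STS component is not shown.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

sentences = [
    "The reactor was heated to 300 K before the catalyst was added.",
    "Pressure was then held constant for two hours.",
    "Football season opens next week with three derby matches.",
    "Ticket sales for the stadium have already sold out.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(sentences, normalize_embeddings=True)
# Cosine similarity between consecutive sentences (vectors are unit-normalized).
sims = (emb[:-1] * emb[1:]).sum(axis=1)

threshold = 0.3                      # split where adjacent similarity drops
chunks, current = [], [sentences[0]]
for sent, sim in zip(sentences[1:], sims):
    if sim < threshold:
        chunks.append(current)
        current = []
    current.append(sent)
chunks.append(current)
print(f"{len(chunks)} chunks:", chunks)
```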
Computation doi: 10.3390/computation13060150
Authors: Alexander Zeifman, Yacov Satin, Ilia Usov, Janos Sztrik
This paper deals with queueing models in which the number of customers is described by a (generally inhomogeneous) birth–death process. Depending on the choice of intensities for the arrival and service of customers, the system can exhibit either impatience (in which, as the queue length increases, the arrival intensities decrease and the service intensities increase) or attraction (in which, on the contrary, as the queue length increases, the arrival intensities increase and the service intensities decrease). In this article, various types of such models are considered, and their transient and limiting characteristics are computed. Furthermore, the rate of convergence and related bounds are also dealt with. Several numerical examples illustrate the proposed procedures.
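A small numerical illustration of a birth-death model with impatience-type intensities: the forward Kolmogorov equations of a truncated chain are integrated with SciPy to obtain transient and near-limiting state probabilities. The rate functions and truncation level are arbitrary choices for the sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 20                       # truncation level (maximum queue length)
lam0, mu0 = 4.0, 3.0

def birth(n):                # arrival intensity decreases with queue length (impatience)
    return lam0 / (1.0 + 0.2 * n)

def death(n):                # service intensity grows with queue length
    return mu0 * (1.0 + 0.1 * n) if n > 0 else 0.0

def forward_equations(t, p):
    """Forward Kolmogorov equations dp_n/dt for the truncated birth-death chain."""
    dp = np.zeros_like(p)
    for n in range(K + 1):
        dp[n] -= (birth(n) * (n < K) + death(n)) * p[n]
        if n > 0:
            dp[n] += birth(n - 1) * p[n - 1]
        if n < K:
            dp[n] += death(n + 1) * p[n + 1]
    return dp

p0 = np.zeros(K + 1)
p0[0] = 1.0                              # start from an empty system
sol = solve_ivp(forward_equations, (0.0, 50.0), p0, method="LSODA")
p_final = sol.y[:, -1]
print("mean queue length near the limit:", float(np.arange(K + 1) @ p_final))
```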
]]>Computation doi: 10.3390/computation13060149
Authors: Seweryn Lipiński Szymon Sadkowski Paweł Chwietczuk
The presented study evaluates and compares two deep learning models, i.e., YOLOv8n and Faster R-CNN, for the automated detection of date fruits in natural orchard environments. Both models were trained and tested using a publicly available annotated dataset. YOLO, a single-stage detector, achieved an mAP@0.5 of 0.942 with a training time of approximately 2 h. It demonstrated strong generalization, especially in simpler conditions, and is well-suited for real-time applications due to its speed and lower computational requirements. Faster R-CNN, a two-stage detector using a ResNet-50 backbone, reached comparable accuracy (mAP@0.5 = 0.94) with slightly higher precision and recall. However, its training required significantly more time (approximately 19 h) and resources. Analysis of the deep learning metrics confirmed that both models performed reliably, with YOLO favoring inference speed and Faster R-CNN offering improved robustness under occlusion and variable lighting. Practical recommendations are provided for model selection based on application needs: YOLO for mobile or field robotics and Faster R-CNN for high-accuracy offline tasks. Additional conclusions highlight the benefits of GPU acceleration and high-resolution inputs. The study contributes to the growing body of research on AI deployment in precision agriculture and provides insights into the development of intelligent harvesting and crop monitoring systems.
]]>Computation doi: 10.3390/computation13060148
Authors: Warut Boonphakdee Duangrat Hirunyasiri Peerayuth Charnsethikul
In inventory management, storage capacity constraints complicate multi-item lot-sizing decisions. As the number of items increases, deciding how much of each item to order without exceeding capacity becomes more difficult. Dynamic programming works efficiently for a single item, but when capacity constraints are nearly minimal across multiple items, novel heuristics are required. However, previous heuristics have mainly focused on inventory bound constraints. Therefore, this paper introduces push and pull heuristics to solve the multi-item uncapacitated lot-sizing problem under near-minimal capacities. First, a dynamic programming approach based on a network flow model was used to generate the initial replenishment plan for the single-item lot-sizing problem. Next, under storage capacity constraints, the push operation moved the selected replenishment quantities from the current period to subsequent periods to meet all demand requirements. Finally, the pull operation shifted the selected replenishment quantities from the current period into earlier periods, ensuring that all demand requirements were satisfied. The results of the random experiment showed that the proposed heuristic generated solutions whose performance compared well with the optimal solution. This heuristic effectively solves all randomly generated instances representing worst-case conditions, ensuring robust operation under near-minimal storage. For large-scale problems under near-minimal storage capacity constraints, the proposed heuristic achieved only small optimality gaps while requiring less running time. However, small- and medium-scale problems can be solved optimally by a Mixed-Integer Programming (MIP) solver with minimal running time.
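For context, the single-item dynamic program the heuristic starts from can be illustrated with a classic Wagner–Whitin-style recursion. This is a generic baseline under assumed setup and holding costs, not the paper's network-flow formulation or its push/pull operations.

    # Single-item uncapacitated lot-sizing via dynamic programming
    # (setup cost K, unit holding cost h per period).
    def wagner_whitin(demand, K, h):
        T = len(demand)
        F = [0.0] + [float("inf")] * T       # F[t]: min cost for periods 1..t
        last_order = [0] * (T + 1)
        for t in range(1, T + 1):
            for j in range(1, t + 1):        # order in period j covers j..t
                hold = sum(h * (i - j) * demand[i - 1] for i in range(j, t + 1))
                cost = F[j - 1] + K + hold
                if cost < F[t]:
                    F[t], last_order[t] = cost, j
        # Recover the replenishment periods by walking back through last_order.
        plan, t = [], T
        while t > 0:
            j = last_order[t]
            plan.append((j, sum(demand[j - 1:t])))   # (order period, order quantity)
            t = j - 1
        return F[T], list(reversed(plan))

    print(wagner_whitin([20, 50, 10, 50, 50], K=100, h=1.0))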
]]>Computation doi: 10.3390/computation13060147
Authors: Shamaltha M. Wickramaarachchi S. A. Dewmini Suraweera D. M. Pasindu Akalanka V. Logeeshan Chathura Wanigasekara
The accurate estimation of the remaining useful life (RUL) of lithium-ion batteries (LIBs) is essential for ensuring safety and enabling effective battery health management systems. To address this challenge, data-driven solutions leveraging advanced machine learning and deep learning techniques have been developed. This study introduces a novel framework, Deep Neural Networks with Memory Features (DNNwMF), for predicting the RUL of LIBs. The integration of memory features significantly enhances the model’s accuracy, and an autoencoder is incorporated to optimize the feature representation. The focus of this work is on feature engineering and uncovering hidden patterns in the data. The proposed model was trained and tested using lithium-ion battery cycle life datasets from NASA’s Prognostic Centre of Excellence and CALCE Lab. The optimized framework achieved an impressive RMSE of 6.61%, and with suitable modifications, the DNN model demonstrated a prediction accuracy of 92.11% for test data, which was used to estimate the RUL of Nissan Leaf Gen 01 battery modules.
]]>Computation doi: 10.3390/computation13060146
Authors: Laura Sofía Avellaneda-Gómez Brandon Cortés-Caicedo Oscar Danilo Montoya Jesús M. López-Lezama
This article proposes an optimization methodology to address the joint placement and capacity design of PV units and D-STATCOMs within unbalanced three-phase distribution systems. The proposed model adopts a mixed-integer nonlinear programming structure using complex-valued variables, with the objective of minimizing the total annual cost, including investment, maintenance, and energy purchases. A leader–follower optimization framework is adopted, where the leader stage utilizes the Generalized Normal Distribution Optimization (GNDO) algorithm to generate candidate solutions, while the follower stage conducts power flow calculations through successive approximation to assess the objective value. The proposed approach is tested on 25- and 37-node feeders and benchmarked against three widely used metaheuristic algorithms: the Chu and Beasley Genetic Algorithm, Particle Swarm Optimization, and the Vortex Search Algorithm. The results indicate that the proposed strategy consistently achieves highly cost-efficient outcomes. For the 25-node system, the cost is reduced from USD 2,715,619.98 to USD 2,221,831.66 (18.18%), and for the 37-node system, from USD 2,927,715.61 to USD 2,385,465.29 (18.52%). GNDO also surpassed the alternative algorithms in terms of solution precision, robustness, and statistical dispersion across 100 runs. All numerical simulations were executed using MATLAB R2024a. These findings confirm the scalability and reliability of the proposed method, positioning it as an effective tool for planning distributed energy integration in practical unbalanced networks.
]]>Computation doi: 10.3390/computation13060145
Authors: Jihene Abdennadher Moncef Boukthir
This study aimed to accurately simulate the main tidal characteristics in a regional domain featuring four open boundaries, with a primary focus on baroclinic tides. Such understanding is crucial for improving the representation of oceanic energy transfer and mixing processes in numerical models. To this end, the astronomical potential, load tide effects, and a wavelet-based analysis method were implemented in the three-dimensional ROMS model. The inclusion of the astronomical tidal potential and load tide effects aimed to enhance the accuracy of tidal simulations, while the wavelet method was employed to analyze the generation and propagation of internal tides from their source regions and to characterize their main features. Twin simulations with and without astronomical potential forcing were conducted to evaluate its influence on tidal elevations and currents. Model performance was assessed through comparison with tide gauge observations. Incorporating the potential forcing improves simulation accuracy, as the model fields successfully reproduced the main features of the barotropic tide and showed good agreement with observed amplitude and phase data. A complex principal component analysis was then applied to a matrix of normalized wavelet coefficients derived from the enhanced model outputs, enabling the characterization of horizontal modal propagation and vertical mode decomposition of both M2 and nonlinear M4 internal tides.
]]>Computation doi: 10.3390/computation13060144
Authors: Vladislav Kaverinskiy Illya Chaikovsky Anton Mnevets Tatiana Ryzhenko Mykhailo Bocharov Kyrylo Malakhov
This study explores the potential of unsupervised machine learning algorithms to identify latent cardiac risk profiles by analyzing ECG-derived parameters from two general groups: clinically healthy individuals (Norm dataset, n = 14,863) and patients hospitalized with heart failure (patients’ dataset, n = 8220). Each dataset includes 153 ECG and heart rate variability (HRV) features, including both conventional and novel diagnostic parameters obtained using a Universal Scoring System. The study aims to apply unsupervised clustering algorithms to ECG data to detect latent risk profiles related to heart failure, based on distinctive ECG features. The focus is on identifying patterns that correlate with cardiac health risks, potentially aiding in early detection and personalized care. We applied a combination of Uniform Manifold Approximation and Projection (UMAP) for dimensionality reduction and Hierarchical Density-Based Spatial Clustering (HDBSCAN) for unsupervised clustering. Models trained on one dataset were applied to the other to explore structural differences and detect latent predispositions to cardiac disorders. Both Euclidean and Manhattan distance metrics were evaluated. Features such as the QRS angle in the frontal plane, Detrended Fluctuation Analysis (DFA), High-Frequency power (HF), and others were analyzed for their ability to distinguish different patient clusters. In the Norm dataset, Euclidean distance clustering identified two main clusters, with Cluster 0 indicating a lower risk of heart failure. Key discriminative features included the “ALPHA QRS ANGLE IN THE FRONTAL PLANE” and DFA. In the patients’ dataset, three clusters emerged, with Cluster 1 identified as potentially high-risk. Manhattan distance clustering provided additional insights, highlighting features like “ST DISLOCATION” and “T AMP NORMALIZED” as significant for distinguishing between clusters. The analysis revealed distinct clusters that correspond to varying levels of heart failure risk. In the Norm dataset, two main clusters were identified, with one associated with a lower risk profile. In the patients’ dataset, a three-cluster structure emerged, with one subgroup displaying markedly elevated risk indicators such as high-frequency power (HF) and altered QRS angle values. Cross-dataset clustering confirmed consistent feature shifts between groups. These findings demonstrate the feasibility of ECG-based unsupervised clustering for early risk stratification. The results offer a non-invasive tool for personalized cardiac monitoring and merit further clinical validation. These findings emphasize the potential for clustering techniques to contribute to early heart failure detection and personalized monitoring. Future research should aim to validate these results in other populations and integrate these methods into clinical decision-making frameworks.
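A minimal sketch of the dimensionality-reduction plus clustering pipeline described (UMAP followed by HDBSCAN) is given below. The feature matrix, scaling choice, and hyperparameters are placeholders, not the study's settings.

    import numpy as np
    import umap                      # pip install umap-learn
    import hdbscan                   # pip install hdbscan
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 153))          # stand-in for 153 ECG/HRV features

    X_scaled = StandardScaler().fit_transform(X)
    embedding = umap.UMAP(n_components=2, metric="euclidean",
                          n_neighbors=30, random_state=42).fit_transform(X_scaled)
    labels = hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(embedding)

    print("clusters found:", sorted(set(labels)))   # -1 denotes noise points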
]]>Computation doi: 10.3390/computation13060143
Authors: Wenbin Song Hanqian Wu Chunlin Pu
To address the high labor cost, low detection efficiency, and insufficient accuracy of traditional pipe gallery disease detection methods, this paper proposes PipeU-NetX, a deep learning-based pipe gallery disease segmentation model. By introducing the innovative down-sampling module MD-U, up-sampling module SC-U, and feature fusion module FFM, the model optimizes the feature extraction and fusion process, reduces the loss of feature information, and enables accurate segmentation of pipe gallery disease images. In comparison with the U-Net, FCN, and Deeplabv3+ models, PipeU-NetX achieved the best PA, MPA, FWIoU, and MIoU, which were 99.15%, 92.66%, 98.34%, and 87.63%, respectively. Compared with the benchmark model U-Net, the MIoU and MPA of the PipeU-NetX model increased by 4.64% and 3.92%, respectively, the number of parameters decreased by 23.71%, and the detection speed increased by 22.1%. The PipeU-NetX model proposed in this paper demonstrates strong multi-scale feature extraction and adaptive defect-region recognition and provides an effective solution for intelligent monitoring of the pipe gallery environment and accurate disease segmentation.
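The reported metrics (PA, MPA, FWIoU, MIoU) all follow from a pixel-level confusion matrix; the small sketch below shows the standard definitions on a made-up three-class matrix, not the paper's data.

    import numpy as np

    def segmentation_metrics(conf):
        """conf[i, j] = number of pixels of true class i predicted as class j."""
        tp = np.diag(conf).astype(float)
        per_class_total = conf.sum(axis=1)          # pixels of each true class
        pred_total = conf.sum(axis=0)
        union = per_class_total + pred_total - tp
        pa = tp.sum() / conf.sum()                  # pixel accuracy
        mpa = np.mean(tp / per_class_total)         # mean per-class accuracy
        iou = tp / union
        miou = iou.mean()                           # mean intersection over union
        freq = per_class_total / conf.sum()
        fwiou = (freq * iou).sum()                  # frequency-weighted IoU
        return pa, mpa, fwiou, miou

    conf = np.array([[950, 30, 20],
                     [ 25, 430, 45],
                     [ 10,  40, 450]])
    print(segmentation_metrics(conf))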
]]>Computation doi: 10.3390/computation13060142
Authors: Eulalia Martínez José A. Reyes Alicia Cordero Juan R. Torregrosa
In this work, using the weight function technique, we introduce a new family of fourth-order iterative methods optimal in the sense of Kung and Traub for scalar equations, generalizing Jarratt’s method. Through Taylor series expansions, we confirm that all members of this family achieve fourth-order convergence when derivatives up to the fourth order are bounded. Additionally, a stability analysis is performed on quadratic polynomials using complex discrete dynamics, enabling differentiation among the methods based on their stability. To demonstrate practical applicability, a numerical example illustrates the effectiveness of the proposed family. Extending our findings to Banach spaces, we conduct local convergence analyses on a specific subfamily containing Jarratt’s method, requiring only boundedness of the first derivative. This significantly broadens the method’s applicability to more general spaces and reduces constraints on higher-order derivatives. Finally, additional examples validate the existence and uniqueness of approximate solutions in Banach spaces, provided the initial estimate lies within the locally determined convergence radius obtained using majorizing functions.
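For reference, one common form of Jarratt's fourth-order method, the scheme the proposed family generalizes, is sketched below on a simple test function; the test function and tolerances are illustrative.

    # Jarratt's fourth-order method for a scalar equation f(x) = 0.
    def jarratt(f, fprime, x0, tol=1e-12, max_iter=50):
        x = x0
        for _ in range(max_iter):
            fx, dfx = f(x), fprime(x)
            y = x - (2.0 / 3.0) * fx / dfx
            dfy = fprime(y)
            x_new = x - 0.5 * (3.0 * dfy + dfx) / (3.0 * dfy - dfx) * fx / dfx
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Example: root of f(x) = x^3 - 2x - 5 (near 2.0945...)
    root = jarratt(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
    print(root)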
]]>Computation doi: 10.3390/computation13060141
Authors: Jaykumar Ishvarbhai Prajapati Raja Das
The integration of machine learning and stock forecasting is attracting increased curiosity owing to its growing significance. This paper presents two main areas of study: predicting pattern trends for the next day and forecasting opening and closing prices using a new method that adds a dynamic hidden layer to artificial neural networks and employs a unique random k-fold cross-validation to enhance prediction accuracy and improve training. To validate the model, we are considering APPLE, GOOGLE, and AMAZON stock data. As a result, low root mean squared error (1.7208) and mean absolute error (0.9892) in both training and validation phases demonstrate the robust predictive performance of the dynamic ANN model. Furthermore, high R-values indicated a strong correlation between the experimental data and proposed model estimates.
]]>Computation doi: 10.3390/computation13060140
Authors: Carlos Javier Morales-Perez David Camarena-Martinez Juan Pablo Amezquita-Sanchez Jose de Jesus Rangel-Magdaleno Edwards Ernesto Sánchez Ramírez Martin Valtierra-Rodriguez
This work presents a lightweight and practical methodology for detecting inter-turn short-circuit faults in squirrel-cage induction motors under different mechanical load conditions. The proposed approach utilizes a one-dimensional convolutional neural network (1D-CNN) enhanced with residual blocks and trained on differentiated stator current signals obtained under different mechanical load conditions. This preprocessing step enhances fault-related features, enabling improved learning while maintaining the simplicity of a lightweight CNN. The model achieved classification accuracies above 99.16% across all folds in five-fold cross-validation and demonstrated the ability to detect faults involving as few as three short-circuited turns. Comparative experiments with the Multi-Scale 1D-ResNet demonstrate that the proposed method achieves similar or superior performance while significantly reducing training time. These results highlight the model’s suitability for real-time fault detection in embedded and resource-constrained industrial environments.
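A sketch of the kind of architecture described (a lightweight 1D-CNN with a residual block, fed with first-differenced current windows) is shown below in PyTorch; the layer sizes, window length, and number of classes are assumptions, not the paper's exact model.

    import torch
    import torch.nn as nn

    class ResidualBlock1D(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
            self.bn1, self.bn2 = nn.BatchNorm1d(channels), nn.BatchNorm1d(channels)
            self.relu = nn.ReLU()

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)               # skip connection

    class FaultCNN(nn.Module):
        def __init__(self, n_classes=4):
            super().__init__()
            self.stem = nn.Sequential(nn.Conv1d(1, 16, kernel_size=7, padding=3),
                                      nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(4))
            self.res = ResidualBlock1D(16)
            self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                      nn.Linear(16, n_classes))

        def forward(self, x):
            return self.head(self.res(self.stem(x)))

    # The stator current window is differentiated (first difference) before being fed in.
    current = torch.randn(8, 1, 2048)                # batch of raw current windows
    diff = current[:, :, 1:] - current[:, :, :-1]    # differentiated signal
    print(FaultCNN()(diff).shape)                    # -> torch.Size([8, 4])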
]]>Computation doi: 10.3390/computation13060139
Authors: Wen Cao Shoubao Xue Yujia Xu Huanyu Lin Hui Li Shengjun Deng Lin Li Yun Bai
When shield machines tunnel through soft soil, excavation may be impeded by the accumulation of mud on the cutter head. Based on an analysis of its formation mechanism, geological conditions and shield construction are identified as the main factors in cutter head mud formation. In addition to traditional metrics, the imperforation area at the cutter head center is incorporated into the analysis of shield construction factors. The Analytic Hierarchy Process (AHP) is used to establish a risk assessment model for shield cutter head mud cake, determining the weight of each sub-factor and enabling a preliminary risk assessment of mud cake occurrence. The approach is applied to classify the factors affecting shield mud using the Mawan cross-sea channel construction project (Moon Bay Avenue along the Yangtze River) as a case study. Each factor is scored and weighted according to established scoring criteria and evaluation formulas, yielding the risk of shield mud cake formation in the Mawan tunnel. Moreover, field observations validate the proposed risk model, with the derived risk index showing strong alignment with actual data.
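The AHP weighting step itself is standard: factor weights come from the principal eigenvector of a pairwise comparison matrix, checked with a consistency ratio. The comparison values below are made up for illustration; the paper's factors and scores differ.

    import numpy as np

    def ahp_weights(A):
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)                      # principal eigenvalue
        w = np.abs(vecs[:, k].real)
        w = w / w.sum()                               # normalized factor weights
        n = A.shape[0]
        ci = (vals.real[k] - n) / (n - 1)             # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
        return w, ci / ri                             # weights, consistency ratio

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    weights, cr = ahp_weights(A)
    print(weights, "CR =", round(cr, 3))              # CR < 0.1 => acceptable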
]]>Computation doi: 10.3390/computation13060138
Authors: Ilona A. Isupova Denis A. Rychkov
CrystalShift is an open-source computational tool tailored for the analysis, transformation, and conversion of crystallographic data, with a particular emphasis on organic crystal structures. It offers a comprehensive suite of features valuable for the computational study of solids: format conversion, crystallographic basis transformation, atomic coordinate editing, and molecular layer analysis. These options are especially valuable for studying the mechanical properties of molecular crystals with potential applications in organic materials science. Written in the C programming language, CrystalShift offers computational efficiency and compatibility with widely used crystallographic formats such as CIF, POSCAR, and XYZ. It provides a command-line interface, enabling seamless integration into research workflows while addressing specific challenges in crystallography, such as handling non-standard file formats and robust error correction. CrystalShift may be applied for both in-depth study of particular crystal structure origins and the high-throughput conversion of crystallographic datasets prior to DFT calculations with periodic boundary conditions using VASP code.
]]>Computation doi: 10.3390/computation13060137
Authors: Wilson Chango Mónica Mazón-Fierro Juan Erazo Guido Mazón-Fierro Santiago Logroño Pedro Peñafiel Jaime Sayago
This study addresses the critical need for effective data fusion strategies in pest prediction for pitahaya (dragon fruit) cultivation in the Ecuadorian Amazon, where heterogeneous data sources—such as environmental sensors and chlorophyll measurements—offer complementary but fragmented insights. Current agricultural monitoring systems often fail to integrate these data streams, limiting early pest detection accuracy. To overcome this, we compared early and late fusion approaches using comprehensive experiments. Multidimensionality is a central challenge: the datasets span temporal (hourly sensor readings), spatial (plot-level chlorophyll samples), and spectral (chlorophyll reflectance) dimensions. We applied dimensionality reduction techniques—PCA, KPCA (linear, polynomial, RBF), t-SNE, and UMAP—to preserve relevant structure and enhance interpretability. Evaluation metrics included the proportion of information retained (score) and cluster separability (silhouette score). Our results demonstrate that early fusion yields superior integrated representations, with PCA and KPCA-linear achieving the highest scores (0.96 vs. 0.94), and KPCA-poly achieving the best cluster definition (silhouette: 0.32 vs. 0.31). Statistical validation using the Friedman test (χ2 = 12.00, p = 0.02) and Nemenyi post hoc comparisons (p < 0.05) confirmed significant performance differences. KPCA-RBF performed poorly (score: 0.83; silhouette: 0.05), and although t-SNE and UMAP offered visual insights, they underperformed in clustering (silhouette < 0.12). These findings make three key contributions. First, early fusion better captures cross-domain interactions before dimensionality reduction, improving prediction robustness. Second, KPCA-poly offers an effective non-linear mapping suitable for tropical agroecosystem complexity. Third, our framework, when deployed in Joya de los Sachas, improved pest prediction accuracy by 12.60% over manual inspection, leading to more targeted pesticide use. This contributes to precision agriculture by providing low-cost, scalable strategies for smallholder farmers. Future work will explore hybrid fusion pipelines and sensor-agnostic models to extend generalizability.
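In the spirit of the comparison reported, the sketch below reduces an early-fused feature matrix with PCA and polynomial KPCA and scores cluster separability with the silhouette coefficient; the data and hyperparameters are placeholders.

    import numpy as np
    from sklearn.decomposition import PCA, KernelPCA
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(1)
    X = StandardScaler().fit_transform(rng.normal(size=(300, 12)))   # fused sensor + chlorophyll features

    for name, reducer in [("PCA", PCA(n_components=2)),
                          ("KPCA-poly", KernelPCA(n_components=2, kernel="poly", degree=3))]:
        Z = reducer.fit_transform(X)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
        print(name, "silhouette:", round(silhouette_score(Z, labels), 3))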
]]>Computation doi: 10.3390/computation13060136
Authors: Uliana Zbezhkhovska Dmytro Chumachenko
Accurate forecasting of COVID-19 case numbers is critical for timely and effective public health interventions. However, epidemiological data’s irregular and noisy nature often undermines the predictive performance. This study examines the influence of four smoothing techniques—the rolling mean, the exponentially weighted moving average, a Kalman filter, and seasonal–trend decomposition using Loess (STL)—on the forecasting accuracy of four models: LSTM, the Temporal Fusion Transformer (TFT), XGBoost, and LightGBM. Weekly case data from Ukraine, Bulgaria, Slovenia, and Greece were used to assess the models’ performance over short- (3-month) and medium-term (6-month) horizons. The results demonstrate that smoothing enhanced the models’ stability, particularly for neural architectures, and the model selection emerged as the primary driver of predictive accuracy. The LSTM and TFT models, when paired with STL or the rolling mean, outperformed the others in their short-term forecasts, while XGBoost exhibited greater robustness over longer horizons in selected countries. An ANOVA confirmed the statistically significant influence of the model type on the MAPE (p = 0.008), whereas the smoothing method alone showed no significant effect. These findings offer practical guidance for designing context-specific forecasting pipelines adapted to epidemic dynamics and variations in data quality.
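Three of the four smoothing techniques compared can be illustrated in a few lines with pandas and statsmodels; the weekly case-count series below is synthetic and the window and span choices are assumptions.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import STL

    rng = np.random.default_rng(0)
    idx = pd.date_range("2020-03-01", periods=156, freq="W")          # ~3 years of weekly counts
    cases = pd.Series(1000 + 300*np.sin(np.arange(156)/8) + rng.normal(0, 80, 156), index=idx)

    rolling = cases.rolling(window=4, center=True).mean()             # 4-week rolling mean
    ewma = cases.ewm(span=4).mean()                                    # exponentially weighted moving average
    stl_trend = STL(cases, period=52).fit().trend                      # seasonal-trend decomposition using Loess

    print(pd.DataFrame({"raw": cases, "rolling": rolling,
                        "ewma": ewma, "stl": stl_trend}).tail())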
]]>Computation doi: 10.3390/computation13060135
Authors: Oke I. Idisi Tajudeen T. Yusuf Kolade M. Owolabi Kayode Oshinubi
This study investigates optimal intervention strategies for controlling the spread of two co-circulating strains of SARS-CoV-2 within the Nigerian population. A newly formulated epidemiological model captures the transmission dynamics of the dual-strain system and incorporates three key control mechanisms: vaccination, non-pharmaceutical interventions (NPIs), and therapeutic treatment. To identify the most effective approach, Pontryagin’s Maximum Principle is employed, enabling the derivation of an optimal control function that minimizes both infection rates and associated implementation costs. Through numerical simulations, this study evaluates the performance of individual, paired, and combined intervention strategies. Additionally, a cost-effectiveness assessment based on the Incremental Cost-Effectiveness Ratio (ICER) framework highlights the most economically viable option. The results suggest that the combined application of vaccination and treatment offers superior control over dual-strain transmission, while implementing all three strategies together ensures the most robust suppression of the outbreak.
]]>Computation doi: 10.3390/computation13060134
Authors: N. Lehdili P. Oswald H. D. Nguyen
Measuring the market risk of a trading portfolio in banks, specifically the practical implementation of the value-at-risk (VaR) and expected shortfall (ES) models, involves intensive, repeated calls to the pricing engine. Machine learning algorithms may offer a solution to this challenge. In this study, we investigate Gaussian process (GP) regression and multi-fidelity modeling as approximations of the pricing engine. More precisely, multi-fidelity modeling combines models of different fidelity levels, defined as the degree of detail and precision offered by a predictive model or simulation, to achieve rapid yet precise prediction. We use the regression models to predict the prices of mono- and multi-asset equity option portfolios. In our numerical experiments, conducted under limited data, we observe that both the standard GP model and the multi-fidelity GP model outperform both the traditional approaches used in banks and the well-known neural network model in terms of pricing accuracy and risk calculation efficiency.
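The single-fidelity part of the idea can be sketched as follows: fit a GP to a small number of exact prices and use it as a cheap surrogate. Black–Scholes is used here only as a stand-in for a bank's pricing engine, and the kernel choice is an assumption.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def bs_call(S, K=100.0, T=1.0, r=0.01, sigma=0.2):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    S_train = np.linspace(60, 140, 15).reshape(-1, 1)     # few expensive pricer calls
    y_train = bs_call(S_train.ravel())

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=10.0),
                                  normalize_y=True).fit(S_train, y_train)

    S_test = np.array([[83.0], [101.5], [127.0]])
    pred, std = gp.predict(S_test, return_std=True)
    print(np.c_[pred, std, bs_call(S_test.ravel())])       # surrogate vs exact price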
]]>Computation doi: 10.3390/computation13060133
Authors: Huthaifa I. Ashqar Ahmed Jaber Taqwa I. Alhadidi Mohammed Elhenawy
This study aims to comprehensively review and empirically evaluate the application of multimodal large language models (MLLMs) and large vision–language models (VLMs) in object detection for transportation systems. First, we provide background on the potential benefits of MLLMs in transportation applications and conduct a comprehensive review of current MLLM technologies in previous studies, highlighting their effectiveness and limitations in object detection within various transportation scenarios. Second, we provide an overview of the taxonomy of end-to-end object detection in transportation applications and future directions. Building on this, we propose an empirical analysis that tests MLLMs on three real-world transportation problems involving object detection, namely, road safety attribute extraction, safety-critical event detection, and visual reasoning over thermal images. Our findings provide a detailed assessment of MLLM performance, uncovering both strengths and areas for improvement. Finally, we discuss practical limitations and challenges of MLLMs in enhancing object detection in transportation, thereby offering a roadmap for future research and development in this critical area.
]]>Computation doi: 10.3390/computation13060132
Authors: Francisco J. Baldán Diego García-Gil Carlos Fernandez-Basso
Advances in artificial intelligence (AI) are transforming assisted reproductive technologies by significantly enhancing fertility diagnostics. This review focuses on integrating AI with Computer-Aided Sperm Analysis (CASA) systems to improve assessments of sperm motility, morphology, and DNA integrity. By employing a spectrum of techniques, from classic machine learning (ML), often valued for its interpretability and efficiency with structured data, to deep learning (DL), which excels at extracting intricate features directly from image and video data, the field now achieves more accurate, automated, and high-throughput evaluations. These advanced systems offer significant advantages, including enhanced objectivity, improved consistency over manual methods, and the ability to detect subtle predictive patterns not discernible by human observation. The emergence of extensive open datasets and big data analytics has enabled the development of more robust models. However, limitations persist, such as the dependency on large, high-quality annotated datasets for training DL models, potential challenges in model generalizability across diverse clinical settings, and the “black-box” nature of some complex algorithms, alongside crucial needs for rigorous clinical validation, data standardization, and ethical management of sensitive information. Despite promising progress, these challenges must be addressed. Overall, this review outlines current innovations and future research directions essential for advancing personalized, efficient, and accessible fertility care.
]]>Computation doi: 10.3390/computation13060131
Authors: A. A. Elsadany Abdullah M. Adawi A. M. Awad
This study investigates dynamic behaviors within a competitive Cournot duopoly framework incorporating consumer surplus and social welfare through the bounded rationality method. The distinctive aspect of the competition game is the incorporation of discrete difference equations into the players’ optimization problems. Both rivals seek to achieve optimal quantity outcomes by maximizing their respective objective functions. The first firm seeks to maximize the average of consumer surplus and its profit, while the second firm focuses on optimizing its profit together with a social welfare component. The game map features four fixed points, with one being the Nash equilibrium point at the intersection of the marginal objective functions. Our analysis explores equilibrium stability, dynamic complexities, basins of attraction, and the emergence of chaos through two routes: flip and Neimark–Sacker bifurcations. We observe that increased adjustment speeds can destabilize the system, leading to rich dynamic complexity.
]]>Computation doi: 10.3390/computation13060130
Authors: Manuel A. Centeno-Bautista Andrea V. Perez-Sanchez Juan P. Amezquita-Sanchez David Camarena-Martinez Martin Valtierra-Rodriguez
Cardiovascular diseases are among the major global health problems. For example, sudden cardiac death (SCD) accounts for approximately 4 million deaths worldwide. In particular, an SCD event can subtly change the electrocardiogram (ECG) signal before onset, which is generally undetectable by the patient. Hence, timely detection of these changes in ECG signals could help develop a tool to anticipate an SCD event and respond appropriately in patient care. In this sense, this work proposes a novel computational methodology that combines the maximal overlap discrete wavelet packet transform (MODWPT) with stacked autoencoders (SAEs) to discover suitable features in ECG signals and associate them with SCD prediction. The proposed method efficiently predicts an SCD event with an accuracy of 98.94% up to 30 min before the onset, making it a reliable tool for early detection while providing sufficient time for medical intervention and increasing the chances of preventing fatal outcomes, demonstrating the potential of integrating signal processing and deep learning techniques within computational biology to address life-critical health problems.
]]>Computation doi: 10.3390/computation13060129
Authors: Husniddin Khayrullaev Andicha Zain Endre Kovács
Recently, new and nontrivial analytical solutions containing the Kummer functions have been found for a system of two diffusion–reaction equations. The equations are coupled by two different types of linear reaction terms that have explicit time dependence. We first make some corrections to these solutions in the case of two different reaction terms. Then, we collect eight efficient explicit numerical schemes that are unconditionally stable if the reaction terms are missing, and apply them to the system of equations. We show that they severely outperform the standard explicit methods when low or medium accuracy is required. Using parameter sweeps, we thoroughly investigate how the performance of the methods depends on the coefficients and on parameters such as the length of the examined time interval. We found that, similarly to the single-equation case, the leapfrog–hopscotch method is usually the most efficient for solving these problems.
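For context, the standard explicit baseline that such schemes are compared against is the forward-time central-space (FTCS) step; the single-equation sketch below shows the severe time-step restriction it carries, which the unconditionally stable schemes studied (e.g., leapfrog–hopscotch) avoid. The coefficients and reaction function are illustrative, not the paper's coupled system.

    # Explicit FTCS step for u_t = D u_xx + k(t) u on [0, L] with Dirichlet boundaries.
    import numpy as np

    D, L, nx = 1.0, 1.0, 101
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                     # FTCS stability requires dt <= dx^2 / (2 D)
    x = np.linspace(0.0, L, nx)
    u = np.exp(-100 * (x - 0.5) ** 2)        # initial condition

    def k(t):                                # time-dependent linear reaction coefficient
        return 0.5 * np.exp(-t)

    t = 0.0
    for _ in range(500):
        u[1:-1] += dt * (D * (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2 + k(t) * u[1:-1])
        u[0] = u[-1] = 0.0                   # homogeneous Dirichlet boundaries
        t += dt
    print("max u after", round(t, 4), "time units:", float(u.max()))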
]]>Computation doi: 10.3390/computation13060128
Authors: Alyaa Hussain Naser Dahlia Khaled Bahlool
This study examines the impact of fear effects and cooperative hunting strategies in the context of intraguild predation food webs. The presented model includes a shared prey species with logistic growth and assumes that both the intraguild prey and intraguild predator draw their sustenance from the same resource. Using a Lyapunov function enables the system’s global stability to be proven. The impacts of key parameters on system stability are determined through bifurcation analysis. Numerical simulations show that even slight increases in the intensity of fear have drastic impacts on intraguild prey populations and, at higher levels, populations may go extinct. In addition, shifts in the parameter of cooperative hunting have a profound impact on the survival of the intraguild prey.
]]>Computation doi: 10.3390/computation13050127
Authors: Vlad Teodorescu Laura Obreja Bra?oveanu
Predicting corporate bankruptcy is a key task in financial risk management, and selecting a machine learning model with superior generalization performance is crucial for prediction accuracy. This study evaluates the effectiveness of k-fold cross-validation as a model selection strategy for random forest and XGBoost classifiers using a publicly available dataset of Taiwanese listed companies. We employ a nested cross-validation framework to assess the relationship between cross-validation (CV) and out-of-sample (OOS) performance on 40 different train/test data partitions. On average, we find k-fold cross-validation to be a valid selection technique when applied within a model class; however, k-fold cross-validation may fail for specific train/test splits. We find that 67% of model selection regret variability is explained by the particular train/test split, highlighting an irreducible uncertainty that real-world practitioners must contend with. Our study extensively explores hyperparameter tuning for both classifiers and highlights key insights. Additionally, we investigate practical implementation choices in k-fold cross-validation, such as the value of k or prediction strategies. We conclude that k-fold cross-validation is effective for model selection within a model class and on average, but it can be unreliable in specific cases or when comparing models from different classes, with this latter issue warranting further investigation.
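The nested protocol itself is standard and can be sketched as follows: an inner k-fold loop selects hyperparameters, and an outer loop estimates out-of-sample performance. The dataset and parameter grid below are placeholders, not the Taiwanese data or the paper's grids.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05],
                               random_state=0)          # imbalanced, bankruptcy-like labels

    inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)

    search = GridSearchCV(RandomForestClassifier(random_state=0),
                          param_grid={"n_estimators": [100, 300],
                                      "max_depth": [4, 8, None]},
                          scoring="roc_auc", cv=inner)

    oos_auc = cross_val_score(search, X, y, cv=outer, scoring="roc_auc")
    print("out-of-sample AUC per outer fold:", np.round(oos_auc, 3))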
]]>Computation doi: 10.3390/computation13050126
Authors: Agustín Barrera Sánchez Rafael Campos Amezcua Héctor R. Azcaray Rivera Arturo Martínez Mata Andrés Blanco Ortega Cuauhtémoc Mazón Valadez César Humberto Guzmán Valdivia
Nowadays, the use of biomechanical devices in medical processes and industrial applications allows us to perform tasks in a simpler and faster way. In the medical field, these devices are becoming more and more common, especially in therapeutic applications. In the design and development of orthopedic devices, it is essential to consider the limbs’ kinematic, kinetic, and anthropometric conditions, as well as the implementation of control strategies (robust, PID, fuzzy, and impedance, among others). This work presents a virtual prototype of a knee orthosis and the implementation of a control system to follow a desired trajectory. Results are presented with the virtual prototype through a co-simulation between MSC Adams and MATLAB Simulink with fuzzy control, virtually replicating the gait cycle.
]]>Computation doi: 10.3390/computation13050125
Authors: Yuxiang Liu Xinzhong Xia Jingyang Zhang Kun Wang Bo Yu Mengmeng Wu Jinchao Shi Chao Ma Ying Liu Boyang Hu Xinying Wang Bo Wang Ruzhi Wang Bing Wang
This research introduces a novel hybrid machine learning framework for automated quality prediction and classification of silicon solar modules in production lines. Unlike conventional approaches that rely solely on encapsulation loss rate (ELR) for performance evaluation—a method limited to assessing encapsulation-related power loss—our framework integrates unsupervised clustering and supervised classification to achieve a comprehensive analysis. By leveraging six critical performance parameters (open circuit voltage (VOC), short circuit current (ISC), maximum output power (Pmax), voltage at maximum power point (VPM), current at maximum power point (IPM), and fill factor (FF)), we first employ k-means clustering to dynamically categorize modules into three performance classes: excellent performance (ELR: 0–0.77%), good performance (0.77–8.39%), and poor performance (>8.39%). This multidimensional clustering approach overcomes the narrow focus of traditional ELR-based methods by incorporating photoelectric conversion efficiency and electrical characteristics. Subsequently, five machine learning classifiers—decision trees (DT), random forest (RF), k-nearest neighbors (KNN), naive Bayes classifier (NBC), and support vector machines (SVMs)—are trained to classify modules, achieving 98.90% accuracy with RF demonstrating superior robustness. Pearson correlation analysis further identifies VOC, Pmax, and VPM as the most influential quality determinants, exhibiting strong negative correlations with ELR (−0.953, −0.993, −0.959). The proposed framework not only automates module quality assessment but also enhances production line efficiency by enabling real-time anomaly detection and yield optimization. This work represents a significant advancement in solar module evaluation, bridging the gap between data-driven automation and holistic performance analysis in photovoltaic manufacturing.
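The two-stage framework can be sketched as follows: k-means groups modules into three performance classes from the six electrical parameters, and a supervised classifier then learns to reproduce those classes. The measurements below are simulated stand-ins, not production data.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    # Columns: VOC, ISC, Pmax, VPM, IPM, FF (synthetic values for illustration)
    X = rng.normal(loc=[38, 9.2, 280, 31, 9.0, 0.78],
                   scale=[0.8, 0.3, 12, 0.9, 0.3, 0.03], size=(1000, 6))

    X_scaled = StandardScaler().fit_transform(X)
    quality_class = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

    X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, quality_class,
                                              test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("hold-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))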
]]>Computation doi: 10.3390/computation13050124
Authors: Dongbo Liu Hao Chen Jianxin Wang Yeru Wang
Gene regulatory networks (GRNs) describe the interactions between transcription factors (TFs) and their target genes, playing a crucial role in understanding gene functions and how cells regulate gene expression under different conditions. Recent advancements in multi-omics technologies have provided new opportunities for more comprehensive GRN inference. Among these data types, gene expression and chromatin accessibility are particularly important, as they are key to distinguishing between direct and indirect regulatory relationships. However, existing methods primarily rely on gene expression data while neglecting biological information such as chromatin accessibility, leading to an increased occurrence of false positives in the inference results. To address the limitations of existing approaches, we propose MultiGNN, a supervised framework based on graph neural networks (GNNs). Unlike conventional GRN inference methods, MultiGNN leverages features extracted from both gene expression and chromatin accessibility data to predict regulatory interactions between genes. Experimental results demonstrate that MultiGNN consistently outperforms other methods across seven datasets. Additionally, ablation studies validate the effectiveness of our multi-omics feature integration strategy, offering a new direction for more accurate GRN inference.
]]>Computation doi: 10.3390/computation13050123
Authors: Olga Kozlovska Felix Sadyrbaev
In this article, fourth-order systems of ordinary differential equations are studied. These systems are of a special form, which is used in modeling gene regulatory networks. The nonlinear part depends on the regulatory matrix W, which describes the interrelation between network elements. The behavior of solutions heavily depends on this matrix and other parameters. We research the evolution of trajectories. Two approaches are employed for this. The first approach combines a fourth-order system of two two-dimensional systems and then introduces specific perturbations. This results in a system with periodic attractors that may exhibit sensitive dependence on initial conditions. The second approach involves extending a previously identified system with chaotic solution behavior to a fourth-order system. By skillfully scanning multiple parameters, this method can produce four-dimensional chaotic systems.
]]>Computation doi: 10.3390/computation13050122
Authors: Ahmed S. Rashed Mahy M. Mahdy Samah M. Mabrouk Rasha Saleh
Dengue fever (DF) is considered one of the most rapidly spreading infectious diseases, which is primarily transmitted to humans by bites from infected Aedes mosquitoes. The current investigation considers the spread patterns of dengue disease with and without host population awareness. It is assumed that some individuals decrease their contact with infected mosquitoes by adopting precautionary behaviors due to their awareness of the disease. Certain susceptible groups actively prevent mosquito bites, and a few infected are isolated to reduce further infections. The basic reproduction number and population dynamics are modeled by a system of fractional-order differential equations. The system of equations is solved using the Adomian Decomposition Method (ADM) since it converges rapidly to the exact solution and can give explicit analytical solutions. Solutions derived are analyzed and plotted for different fractional orders, providing useful insights into population dynamics and contributing to a better understanding of the initiation and control of disease.
]]>Computation doi: 10.3390/computation13050121
Authors: Vladislav Byankin Aleksandr Tynda Denis Sidorov Aliona Dreglea
Loaded Volterra integral equations represent a novel class of integral equations that have attracted considerable attention in recent years due to their numerous applications in various fields of science and engineering. This class of Volterra integral equations is characterized by the presence of a loading function, which complicates their theoretical and numerical analysis. In this paper, we study Volterra equations with locally loaded integral operators. The existence and uniqueness of their solutions are examined. A collocation-type method for the approximate solution of such equations is proposed, based on piecewise linear approximation of the exact solution. To confirm the convergence of the method, several numerical results for solving model problems are provided.
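As background for the kind of discretization the collocation method builds on, the sketch below time-steps a standard (unloaded) Volterra equation of the second kind, u(t) = g(t) + \int_0^t K(t, s) u(s) ds, with the trapezoidal rule; the loading term itself is not handled here.

    import numpy as np

    def volterra2_trapezoid(g, K, T=1.0, n=200):
        h = T / n
        t = np.linspace(0.0, T, n + 1)
        u = np.zeros(n + 1)
        u[0] = g(t[0])
        for i in range(1, n + 1):
            s = 0.5 * K(t[i], t[0]) * u[0]
            s += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
            # Solve the implicit trapezoidal relation for u[i].
            u[i] = (g(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i]))
        return t, u

    # Test problem with known solution u(t) = exp(t): u(t) = 1 + \int_0^t u(s) ds.
    t, u = volterra2_trapezoid(lambda t: 1.0, lambda t, s: 1.0)
    print("max error vs exp(t):", float(np.max(np.abs(u - np.exp(t)))))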
]]>Computation doi: 10.3390/computation13050120
Authors: Nataliya Shakhovska Bohdan Sydor Solomiia Liaskovska Olga Duran Yevgen Martyn Volodymyr Vira
One of the key barriers to neural network adoption is the lack of computational resources and high-quality training data—particularly for unique objects without existing datasets. This research explores methods for generating realistic synthetic images that preserve the visual properties of target objects, ensuring their similarity to real-world appearance. We propose a flexible approach for synthetic data generation, focusing on improved accuracy and adaptability. Unlike many existing methods that rely heavily on specific generative models and require retraining with each new version, our method remains compatible with state-of-the-art models without high computational overhead. It is especially suited for user-defined objects, leveraging a 3D representation to preserve fine details and support integration into diverse environments. The approach also addresses resolution limitations by ensuring consistent object placement within high-quality scenes.
]]>Computation doi: 10.3390/computation13050119
Authors: Aref Mehditabar Hossein Esfandian Seyed Sadegh Motallebi Hasankola
This study aims to comprehensively evaluate the mechanical performance of dry-spun twisted carbon nanotube (CNT) yarns (CNTYs) subjected to uniaxial tensile load. In contrast to earlier approaches, the novelty of the current research lies in incorporating the orthotropic properties of all hierarchical levels of the CNTY structure. The proposed bottom-up model ranges from nanoscale bundles to mesoscale fibrils and, finally, microscale CNTYs. The methodology distinguishes itself by addressing the interplay of constituents across multiple scale levels to compute the transverse properties (orthotropic nature). To this end, rigidity and mass equivalence principles are adopted to replace the truss structure used in previous works, consisting of two-node beam elements representing van der Waals (vdW) forces in nanoscale bundles and inclined narrower bundles in mesoscale fibrils, with an equivalent shell model. After the mechanical properties of the nanoscale bundles are evaluated, they are translated to the mesoscale level to quantify the orthotropic properties of the fibril, which are then fed into the microscale CNTY model. The results indicate that the resulting CNT bundle and fibril exhibit a much lower transverse elastic modulus than the axial values reported in the prior literature. To validate the proposed method, the reproduced overall stress–strain curve of the CNTYs is compared to that obtained experimentally, showing excellent correlation. The presented theoretical approach provides a valuable tool for enhancing the understanding and predictive capabilities related to the mechanical performance of CNTY structures.
]]>Computation doi: 10.3390/computation13050118
Authors: Emina Hadzalic Adnan Ibrahimbegovic
This study presents a digital twin approach to quantifying the durability and failure risk of concrete gravity dams by integrating advanced numerical modelling with field monitoring data. Building on a previously developed finite element model for dam–reservoir interaction analysis, this research extends its application to the assessment of existing, fully operational dams by using digital twin technology. One such case study of a digital twin is given for the concrete gravity dam, Salakovac. The numerical model combines finite element formulations representing the dam as a nonisothermal saturated porous medium and the reservoir water as an acoustic fluid, ensuring realistic simulation results of their interactions. The selected finite element discrete approximations enable the detailed analysis of the dam failure mechanisms under varying extreme conditions, while simultaneously ensuring the consistent transfer of all fields (displacement, temperature, and pressure) at the dam–reservoir interface. A key aspect of this research is the calibration of the numerical model through the systematic definition of boundary conditions, external loads, and material parameters to ensure that the simulation results closely align with observed behaviour, thereby reflecting the current state of the ageing concrete dam. For the given case study of the Salakovac Dam, we illustrate the use of the digital twin to predict the failure mechanism of an ageing concrete dam for the chosen scenario of extreme loads.
]]>Computation doi: 10.3390/computation13050117
Authors: Pavel Balabanov Andrey Egorov Alexander Divin Alexander N. Pchelintsev
This article proposes a mathematical model for experimental estimation of the volumetric heat capacity and thermal conductivity of flat samples, in particular samples cut from potato tubers. The method involved using two pairs of samples, each of which includes the test sample and a reference sample. The pairs of samples were pre-cooled in a refrigerator to a temperature that was 10 to 15 °C below room temperature. Then, the samples were removed from the refrigerator and placed in an air thermostat at ambient temperature, with one pair of samples additionally blown with a weak air flow. Using a thermal imager, the surface temperatures of the samples were recorded. The temperature measurement results were processed using the proposed mathematical models. The temperature measurement results of the reference samples were used to determine the Bi numbers characterizing the heat exchange conditions on the surfaces of the test samples. Taking into account the found Bi values, the volumetric heat capacity and thermal conductivity were calculated using the formulas described in the article. The article also presents a diagram of the measuring device and a method for processing experimental data using the results of experiments as an example, where potato samples were used as the test samples, and polymethyl methacrylate samples were used as the reference samples. The studies were conducted at an ambient air temperature of 20 to 24 °C and at a Bi < 0.3. The specific heat capacity of the potato samples was in the range of 2120–3795 J/(kg·K), and the thermal conductivity was in the range of 0.17–0.5 W/(m·K) with a moisture content of 10–60%.
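As an illustrative step only (not the article's formulas), with Bi < 0.3 the warming of a sample toward ambient temperature is close to a lumped-capacitance exponential, so a time constant can be fitted to the thermal-imager record; the ambient and initial temperatures below are assumed values.

    import numpy as np
    from scipy.optimize import curve_fit

    def warming(t, tau):
        T_amb, T0 = 22.0, 10.0                         # assumed ambient / initial temperatures, deg C
        return T_amb + (T0 - T_amb) * np.exp(-t / tau)

    t_data = np.arange(0, 1800, 60.0)                  # 30 min of readings, one per minute
    true_tau = 520.0
    T_data = warming(t_data, true_tau) + np.random.default_rng(0).normal(0, 0.1, t_data.size)

    tau_fit, _ = curve_fit(warming, t_data, T_data, p0=[300.0])
    print("fitted time constant [s]:", float(tau_fit[0]))
    # With tau = rho * c * V / (h * A) and h * A calibrated on the reference sample,
    # the volumetric heat capacity rho * c follows directly.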
]]>Computation doi: 10.3390/computation13050116
Authors: Diego Armando Pérez-Rosero Diego Alejandro Manrique-Cabezas Jennifer Carolina Triana-Martinez Andrés Marino álvarez-Meza German Castellanos-Dominguez
Addressing unemployment is essential for formulating effective public policies. In particular, socioeconomic and monetary variables serve as essential indicators for anticipating labor market trends, given their strong influence on employment dynamics and economic stability. However, effective unemployment rate prediction requires addressing the non-stationary and non-linear characteristics of labor data. Equally important is the preservation of interpretability in both samples and features to ensure that forecasts can meaningfully inform public decision-making. Here, we provide an explainable framework integrating unsupervised and supervised machine learning to enhance unemployment rate prediction and interpretability. Our approach is threefold: (i) we gather a dataset for Colombian unemployment rate prediction including monetary and socioeconomic variables. (ii) Then, we used a Local Biplot technique from the widely recognized Uniform Manifold Approximation and Projection (UMAP) method along with local affine transformations as an unsupervised representation of non-stationary and non-linear data patterns in a simplified and comprehensible manner. (iii) A Gaussian Processes regressor with kernel-based feature relevance analysis is coupled as a supervised counterpart for both unemployment rate prediction and input feature importance analysis. We demonstrated the effectiveness of our proposed approach through a series of experiments conducted on our customized database focused on unemployment indicators in Colombia. Furthermore, we carried out a comparative analysis between traditional statistical techniques and modern machine learning methods. The results revealed that our framework significantly enhances both clustering and predictive performance, while also emphasizing the importance of input samples and feature selection in driving accurate outcomes.
]]>Computation doi: 10.3390/computation13050115
Authors: Ghader Ghasemi Reza Parvaz Yavar Khedmati Yengejeh
The rapid development of communication over the last decade has heightened the need for secure platforms for transferring data, including images. One approach to secure image transmission is encryption. In this work, an encryption algorithm for multiple images is introduced. In the first step of the proposed algorithm, a key generation algorithm based on a chaotic system and the wavelet transform is introduced, and in the next step, the encryption algorithm is developed by introducing rearrange and shift functions based on a chaotic system. One of the most important tools used in the proposed algorithm is the hybrid chaotic system, which is obtained from fractional derivatives and the Cat map. Different types of tests used to study the behavior of this system demonstrate the efficiency of the proposed hybrid system. In the last step of the proposed method, various statistical and security tests, including histogram analysis, correlation coefficient analysis, and data loss and noise attack simulations, were performed on the proposed algorithm. The results show that the proposed algorithm performs well in secure transmission.
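A minimal example of the Arnold cat map, the kind of chaotic pixel permutation used in such scrambling stages, is given below; the fractional-derivative component of the paper's hybrid system is not reproduced here.

    import numpy as np

    def cat_map(img, iterations=1):
        """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N to a square image."""
        n = img.shape[0]
        out = img.copy()
        for _ in range(iterations):
            scrambled = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
            out = scrambled
        return out

    img = np.arange(64).reshape(8, 8)
    scrambled = cat_map(img, iterations=3)
    # The map is invertible; on an 8x8 grid it returns to the original after 6 iterations.
    print(np.array_equal(img, cat_map(img, iterations=6)))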
]]>Computation doi: 10.3390/computation13050114
Authors: Andres Felipe Zamora-Mu?oz Martha Lucia Orozco-Gutierrez Dany Mauricio Lopez-Santiago Jhoan Alejandro Montenegro-Oviedo Carlos Andres Ramos-Paja
The introduction of renewable energy sources in microgrids increases energy reliability, especially in small communities that operate disconnected from the main power grid. A battery energy storage system (BESS) plays an important role in microgrids because it helps mitigate the problems caused by the variability of renewable energy sources, such as unattended demand and voltage instability. However, a BESS increases the cost of a microgrid due to the initial investment and maintenance, requiring a cost–benefit analysis to determine its size for each application. This paper addresses this problem by formulating a method that combines economic and technical approaches to provide favorable relations between costs and performances. Mixed integer linear programming (MILP) is used as optimization algorithm to size BESS, which is applied to an isolated community in Colombia located at Isla Múcura. The results indicate that the optimal BESS requires a maximum power of 17.6 kW and a capacity of 76.61 kWh, which is significantly smaller than the existing 480 kWh system. Thus, a reduction of 83.33% in the number of batteries is obtained. This optimized size reduces operational costs while maintaining technical reliability. The proposed method aims to solve an important problem concerning state policy and the universalization of electrical services, providing more opportunities to decision makers in minimizing the costs and efforts in the implementation of energy storage systems for isolated microgrids.
]]>Computation doi: 10.3390/computation13050113
Authors: Igor Nesteruk
Mathematical modeling allows taking into account registered and hidden infections to make correct predictions of epidemic dynamics and develop recommendations that can reduce the negative impact on public health and the economy. A model for visible and hidden epidemic dynamics (published by the author in February 2025) has been generalized to account for the effects of re-infection and newborns. An analysis of the equilibrium points, examples of numerical solutions, and comparisons with the dynamics of real epidemics are provided. A stable quasi-equilibrium for the particular case of almost completely hidden epidemics was also revealed. Numerical results and comparisons with the COVID-19 epidemic dynamics in Austria and South Korea showed that re-infections, newborns, and hidden cases make epidemics endless. Newborns can cause repeated epidemic waves even without re-infections. In particular, the next epidemic peak of pertussis in England is expected to occur in 2031. With the use of effective algorithms for parameter identification, the proposed approach can ensure effective predictions of visible and hidden numbers of cases and infectious and removed patients.
]]>Computation doi: 10.3390/computation13050112
Authors: Roberto Cavassi Antonio Cicone Enza Pellegrino Haomin Zhou
The decomposition of a signal is a fundamental tool in many fields of research, including signal processing, geophysics, astrophysics, engineering, medicine, and many more. By breaking down complex signals into simpler oscillatory components, we can enhance the understanding and processing of the data, unveiling hidden information contained in them. Traditional methods, such as Fourier analysis and wavelet transforms, which are effective in handling mono-dimensional stationary signals, struggle with non-stationary datasets and they require the selection of predefined basis functions. In contrast, the empirical mode decomposition (EMD) method and its variants, such as Iterative Filtering (IF), have emerged as effective non-linear approaches, adapting to signals without any need for a priori assumptions. To accelerate these methods, the Fast Iterative Filtering (FIF) algorithm was developed, and further extensions, such as Multivariate FIF (MvFIF) and Multidimensional FIF (FIF2), have been proposed to handle higher-dimensional data. In this work, we introduce the Multidimensional and Multivariate Fast Iterative Filtering (MdMvFIF) technique, an innovative method that extends FIF to handle data that varies simultaneously in space and time, like the ones sampled using sensor arrays. This new algorithm is capable of extracting Intrinsic Mode Functions (IMFs) from complex signals that vary in both space and time, overcoming limitations found in prior methods. The potentiality of the proposed method is demonstrated through applications to artificial and real-life signals, highlighting its versatility and effectiveness in decomposing multidimensional and multivariate non-stationary signals. The MdMvFIF method offers a powerful tool for advanced signal analysis across many scientific and engineering disciplines.
]]>Computation doi: 10.3390/computation13050111
Authors: Norma Flores-Holguín Juan Frau Daniel Glossman-Mitnik
Kapakahines A–G are natural products isolated from the marine sponge Carteriospongia sp., characterized by complex molecular architectures composed of fused rings and diverse functional groups. Preliminary studies have indicated that some of these peptides may exhibit cytotoxic and antitumor activities, which has prompted interest in further exploring their chemical and pharmacokinetic properties. Computational chemistry—particularly Conceptual Density Functional Theory (CDFT)-based Computational Peptidology (CP)—offers a valuable framework for investigating such compounds. In this study, the CDFT-CP approach is applied to analyze the structural and electronic properties of Kapakahines A–G. Alongside the calculation of global and local reactivity descriptors, predicted ADMET (Absorption, Distribution, Metabolism, Excretion, and Toxicity) profiles and pharmacokinetic parameters, including pKa and LogP, are evaluated. The integrated computational analysis provides insights into the stability, reactivity, and potential drug-like behavior of these marine-derived cyclopeptides and contributes to the theoretical groundwork for future studies aimed at optimizing their bioactivity and safety profiles.
]]>Computation doi: 10.3390/computation13050110
Authors: Roger Z. Ríos-Mercado L. Carolina Riascos-álvarez Jonathan F. Bard
Kidney-paired donation programs assist patients in need of a kidney in swapping their incompatible donor with another incompatible patient–donor pair for a suitable kidney in return. The kidney exchange problem (KEP) is a mathematical optimization problem that consists of finding the maximum set of matches in a directed graph representing the pool of incompatible pairs. Depending on the specific framework, these matches can come in the form of (bounded) directed cycles or directed paths. This gives rise to a family of KEP models that have been studied over the past few years. Several of these models require an exponential number of constraints to eliminate cycles and chains that exceed a given length. In this paper, we present enhancements to a subset of existing models that exploit the connectivity properties of the underlying graphs, thereby rendering more compact and tractable models in both cycle-only and cycle-and-chain versions. In addition, an efficient algorithm is developed for detecting violated constraints and solving the problem. To assess the value of our enhanced models and algorithm, an extensive computational study was carried out comparing them with existing formulations. The results demonstrated the effectiveness of the proposed approach. For example, among the main findings for edge-based cycle-only models, the proposed (*PRE(i)) model uses a new set of constraints and a small subset of the full set of length-k paths that are included in the edge formulation. The proposed model was observed to achieve a more than 98% reduction in the number of such paths across all tested instances. With respect to cycle-and-chain formulations, the proposed (*ReSPLIT) model outperformed Anderson’s arc-based (AA) formulation and the path constrained-TSP formulation on all instances that we tested. In particular, when tested on a difficult set of instances from the literature, the proposed (*ReSPLIT) model provided the best results compared to the AA and PC-based models.
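As background for the cycle-only formulations, the toy sketch below enumerates the directed cycles of length at most k in a small compatibility digraph, which is the combinatorial object those models constrain. The enhanced formulations proposed in the paper are designed precisely to avoid relying on such exhaustive enumeration; the graph and helper function are illustrative assumptions.

```python
# Toy illustration of the main ingredient of cycle formulations for the KEP:
# enumerating directed cycles of length <= k in a compatibility digraph
# (an arc i -> j means pair i's donor is compatible with pair j's patient).
def bounded_cycles(adj, k):
    cycles = set()
    def dfs(start, node, path):
        for nxt in adj.get(node, ()):
            if nxt == start and len(path) >= 2:
                canonical = min(path[i:] + path[:i] for i in range(len(path)))
                cycles.add(tuple(canonical))            # store one rotation only
            elif nxt not in path and len(path) < k:
                dfs(start, nxt, path + [nxt])
    for v in adj:
        dfs(v, v, [v])
    return sorted(list(c) for c in cycles)

adj = {1: [2], 2: [1, 3], 3: [1]}
print(bounded_cycles(adj, k=3))   # [[1, 2], [1, 2, 3]]
```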
]]>Computation doi: 10.3390/computation13050109
Authors: Pan Zhang Lei Guo Zhicheng Huang Zhoupeng Rao Ying Zhang Zhi Sun Rui Xu Deng Li
The identification of critical fault information plays a crucial role in ensuring the reliability and stability of power systems. However, existing fault-identification technologies heavily rely on high-dimensional sensor data, which often contain redundant and noisy information. Moreover, conventional data preprocessing approaches typically employ fixed time windows, neglecting variations in fault characteristics under different system states. This limitation may lead to incomplete feature selection and ineffective dimensionality reduction, ultimately affecting the accuracy of fault classification. To address these challenges, this study proposes a method of critical fault information identification that integrates a scalable time window with Principal Component Analysis (PCA). The proposed method dynamically adjusts the time window size based on real-time system conditions, ensuring more flexible data capture under diverse fault scenarios. Simultaneously, PCA is employed to reduce dimensionality, extract representative features, and remove redundant noise, thereby enhancing the quality of the extracted fault information. Furthermore, this approach lays a solid foundation for the subsequent application of deep learning-based fault-diagnosis techniques. By improving feature extraction and reducing computational complexity, the proposed method effectively alleviates the workload of operation and maintenance personnel while enhancing fault classification accuracy. Our experimental results demonstrate that the proposed method significantly improves the precision and robustness of fault identification in power systems.
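A minimal sketch of the two-stage pipeline follows, assuming an invented rule that lengthens the analysis window as a simple variance-based disturbance measure grows and then compresses the windowed multichannel data with PCA. The window-adaptation logic and all parameter values are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch: size the time window from a crude disturbance measure, then reduce
# the windowed sensor data with PCA. The window rule is an invented illustration.
def window_length(recent, base=128, max_len=1024):
    disturbance = float(np.var(recent))
    return int(min(max_len, base * (1.0 + disturbance)))

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 40))            # 40 sensor channels
n = window_length(data[-256:])                # size the window from recent data
window = data[-n:]                            # most recent n samples
features = PCA(n_components=5).fit_transform(window)
print(n, features.shape)                      # reduced (n, 5) feature matrix
```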
]]>Computation doi: 10.3390/computation13050108
Authors: Ibtehal Baazeem Hend Al-Khalifa Abdulmalik Al-Salman
Assessing text readability is important for helping language learners and readers select texts that match their proficiency levels. Research in cognitive psychology, which uses behavioral data such as eye-tracking and electroencephalogram signals, has shown its effectiveness in detecting cognitive activities that correlate with text difficulty during reading. However, Arabic, with its distinctive linguistic characteristics, presents unique challenges in readability assessment using cognitive data. While behavioral data have been employed in readability assessments, their full potential, particularly in Arabic contexts, remains underexplored. This paper presents the development of the first Arabic eye-tracking corpus, comprising eye movement data collected from Arabic-speaking participants, with a total of 57,617 words. Subsequently, this corpus can be utilized to evaluate a broad spectrum of text-based and gaze-based features, employing machine learning and deep learning methods to improve Arabic readability assessments by integrating cognitive data into the readability assessment process.
]]>Computation doi: 10.3390/computation13050106
Authors: Prasad Adhav María Bélen Farias
Cesarean sections (CSs) are essential in certain medical contexts but, when overused, can carry risks for both the mother and child. In the unique multilingual landscape of Luxembourg, this study explores whether non-medical factors—such as the language spoken—affect CS rates. Through a survey conducted with women in Luxembourg, we first applied statistical methods to investigate the influence of various social and linguistic parameters on CS. Additionally, we explored how these factors relate to the feelings of happiness and respect women experience during childbirth. Subsequently, we employed four machine learning models to predict CS based on the survey data. Our findings reveal that women who speak Spanish have a statistically higher likelihood of undergoing a CS than women who do not report speaking that language. Furthermore, those who had a CS report feeling less happy and respected compared to those with vaginal births. With both limited and augmented data, our models achieve an average accuracy of approximately 81% in predicting CS. While this study serves as an initial exploration into the social aspects of childbirth, it underscores the need for larger-scale studies to deepen our understanding and to inform policy-makers and health practitioners who support women during their pregnancies and births. This preliminary research advocates for further investigation to address this complex social issue comprehensively.
]]>Computation doi: 10.3390/computation13050107
Authors: Juan P. Cardona José U. Castellanos Luis C. Gutiérrez
The present work aims to validate a computational simulation model, proposed as a state observer for active control, that determines the static deflection experienced by rectangular flat plates along the longest edge when subjected to uniform and hydrostatic pressures. The plates are isotropic and simply supported on all four edges, and the pressures do not exceed the elastic limit of the plate material. Analytical solutions of the flat-plate partial differential equation established by Kirchhoff theory are first determined using double Fourier series. The corresponding simulations are then performed with the finite element method (FEM) in ANSYS Workbench 17.
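For reference, the classical Navier double-Fourier-series solution for a simply supported rectangular Kirchhoff plate under uniform pressure can be evaluated directly. The sketch below implements that textbook series; the material, geometry, and load values are illustrative and this is not the authors' code.

```python
import numpy as np

# Classical Navier series deflection of a simply supported rectangular
# Kirchhoff plate under uniform pressure q0 (textbook reference solution).
def navier_deflection(x, y, a, b, q0, D, terms=49):
    w = 0.0
    for m in range(1, terms + 1, 2):          # odd terms only for a uniform load
        for n in range(1, terms + 1, 2):
            denom = m * n * ((m / a) ** 2 + (n / b) ** 2) ** 2
            w += np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / b) / denom
    return 16.0 * q0 / (np.pi ** 6 * D) * w

E, nu, h = 200e9, 0.3, 0.01                   # steel plate, 10 mm thick (illustrative)
D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))     # flexural rigidity
a, b, q0 = 1.0, 1.5, 10e3                     # 1.0 m x 1.5 m plate, 10 kPa
print(navier_deflection(a / 2, b / 2, a, b, q0, D))   # center deflection, ~4 mm here
```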
]]>Computation doi: 10.3390/computation13050105
Authors: Gamal M. Ismail Galal M. Moatimid Stylianos V. Kontomaris Livija Cveticanin
The study offers a comprehensive investigation of periodic solutions in highly nonlinear oscillator systems, employing advanced analytical and numerical techniques. The motivation stems from the urgent need to understand complex dynamical behaviors in physics and engineering, where traditional linear approximations fall short. This work precisely applies He’s Frequency Formula (HFF) to provide theoretical insights into certain classes of strongly nonlinear oscillators, as illustrated through five broad examples drawn from various scientific and engineering disciplines. Additionally, the novelty of the present work lies in reducing the required time compared to the classical perturbation techniques that are widely employed in this field. The proposed non-perturbative approach (NPA) effectively converts nonlinear ordinary differential equations (ODEs) into linear ones, equivalent to simple harmonic motion. This method yields a new frequency approximation that aligns closely with the numerical results, often outperforming existing approximation techniques in terms of accuracy. To aid readers, the NPA is thoroughly explained, and its theoretical predictions are validated through numerical simulations using Mathematica Software (MS). An excellent agreement between the theoretical and numerical responses highlights the robustness of this method. Furthermore, the NPA enables a detailed stability analysis, an area where traditional methods frequently underperform. Due to its flexibility and effectiveness, the NPA presents a powerful and efficient tool for analyzing highly nonlinear oscillators across various fields of engineering and applied science.
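As a small worked example of the kind of closed-form frequency-amplitude estimate discussed above, the sketch below compares the classical first-order approximation for the Duffing oscillator, ω ≈ √(1 + 3εA²/4) (obtainable by harmonic balance and typical of He-type frequency formulations), with the frequency obtained by quadrature of the energy integral. The oscillator, amplitude, and parameter values are illustrative and are not one of the five case studies treated in the paper.

```python
import numpy as np
from scipy.integrate import quad

# Duffing oscillator u'' + u + eps*u**3 = 0, released from rest at amplitude A.
eps, A = 0.5, 1.0
V = lambda u: 0.5 * u**2 + 0.25 * eps * u**4          # potential energy

def integrand(theta):
    # quarter-period integrand after the substitution u = A*sin(theta)
    u = A * np.sin(theta)
    return A * np.cos(theta) / np.sqrt(2.0 * (V(A) - V(u)))

quarter_period, _ = quad(integrand, 0.0, np.pi / 2)
w_numerical = 2.0 * np.pi / (4.0 * quarter_period)
w_approx = np.sqrt(1.0 + 0.75 * eps * A**2)
print(w_numerical, w_approx)    # ~1.170 vs ~1.173: close agreement for moderate eps
```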
]]>Computation doi: 10.3390/computation13050104
Authors: Ibtisam Aldawish Rabha W. Ibrahim
Coastal erosion and sediment transport dynamics in Iraq’s shoreline are increasingly affected by extreme climate conditions, including rising sea levels and intensified storms. This study introduces a novel fractional-order sediment transport model, incorporating a modified gamma function-based differential operator to accurately describe erosion rates and stabilization effects. The proposed model evaluates two key stabilization approaches: artificial stabilization (breakwaters and artificial reefs) and bio-engineering solutions (coral reefs, sea-grass, and salt marshes). Numerical simulations reveal that the proposed structures provide moderate sediment retention but degrade over time, leading to diminishing effectiveness. In contrast, bio-engineering solutions demonstrate higher long-term resilience, as natural ecosystems self-repair and adapt to changing environmental conditions. Under extreme climate scenarios, enhanced bio-engineering retains 55% more sediment than no intervention, compared to 35% retention with artificial stabilization. The findings highlight the potential of hybrid coastal protection strategies combining artificial and bio-based stabilization. Future work includes optimizing intervention designs, incorporating localized field data from Iraq’s coastal zones, and assessing cost-effectiveness for large-scale implementation.
]]>Computation doi: 10.3390/computation13050103
Authors: Oleksii Sirotkin Arsentii Prymushko Ivan Puchko Hryhoriy Kravtsov Mykola Yaroshynskyi Volodymyr Artemchuk
Modern computational models tend to become more and more complex, especially in fields like computational biology, physical modeling, social simulation, and others. With the increasing complexity of simulations, modern computational architectures demand efficient parallel execution strategies. This paper proposes a novel approach leveraging the reactive stream paradigm as a general-purpose synchronization protocol for parallel simulation. We introduce a method to construct simulation graphs from predefined transition functions, ensuring modularity and reusability. Additionally, we outline strategies for graph optimization and interactive simulation through push and pull patterns. The resulting computational graph, implemented using reactive streams, offers a scalable framework for parallel computation. Through theoretical analysis and practical implementation, we demonstrate the feasibility of this approach, highlighting its advantages over traditional parallel simulation methods. Finally, we discuss future challenges, including automatic graph construction, fault tolerance, and optimization strategies, as key areas for further research.
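A toy sketch of the push pattern described above follows: two simulation nodes connected by a bounded asyncio queue, so the producer suspends when the consumer lags, giving simple backpressure-like behaviour in plain Python. The transition function and buffer size are illustrative assumptions; this is neither the paper's implementation nor a full reactive-streams protocol.

```python
import asyncio

# Toy push-based simulation graph: a source node advances the state with a
# transition function and pushes it downstream through a bounded queue.
def transition(state):
    return {"t": state["t"] + 1, "x": 0.5 * state["x"] + 1.0}

async def source(out_q, steps=10):
    state = {"t": 0, "x": 0.0}
    for _ in range(steps):
        state = transition(state)
        await out_q.put(state)        # suspends when the consumer lags behind
    await out_q.put(None)             # end-of-stream marker

async def sink(in_q):
    while (state := await in_q.get()) is not None:
        print(state)

async def main():
    q = asyncio.Queue(maxsize=2)      # bounded buffer between graph nodes
    await asyncio.gather(source(q), sink(q))

asyncio.run(main())
```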
]]>Computation doi: 10.3390/computation13050102
Authors: Elliackin Figueiredo Clodomir Santana Hugo Valadares Siqueira Mariana Macedo Attilio Converti Anu Gokhale Carmelo Bastos-Filho
The Fish School Search (FSS) algorithm is a metaheuristic known for its distinctive exploration and exploitation operators and its cumulative success representation approach. Despite its success across various problem domains, the FSS suffers from a high number of parameters, making its performance susceptible to improper parameterization. Additionally, the interplay between its operators requires sequential execution in a specific order and two fitness evaluations per iteration for each individual. This operator intricacy and the number of fitness evaluations become problematic when fitness functions are costly, and they inhibit parallelization. To address these challenges, this paper proposes a Simplified Fish School Search (SFSS) algorithm that preserves the core features of the original FSS while redesigning the fish movement operators and introducing a new turbulence mechanism to enhance population diversity and robustness against stagnation. The SFSS also reduces the number of fitness evaluations per iteration and minimizes the algorithm’s parameter set. Computational experiments were conducted using a benchmark suite from the CEC 2017 competition to compare the SFSS with the traditional FSS and five other well-known metaheuristics. The SFSS outperformed the FSS in 84% of the problems and achieved the best results among all algorithms in 10 of the 26 problems.
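The sketch below illustrates an FSS-style individual movement step with a feeding (weight) update on a toy sphere function, caching fitness so that only one new evaluation per individual is spent each iteration. Parameter values, the step-decay rule, and the weight clipping are assumptions for illustration and do not reproduce the published SFSS operators.

```python
import numpy as np

# FSS-style individual movement + feeding (weight) update on the sphere function.
rng = np.random.default_rng(1)
def fitness(x):                                   # minimise the sphere function
    return np.sum(x ** 2, axis=-1)

pop = rng.uniform(-5, 5, size=(30, 10))           # 30 fish, 10 dimensions
weights = np.ones(30)
step = 0.5
fit = fitness(pop)
for it in range(200):
    trial = pop + step * rng.uniform(-1, 1, size=pop.shape)   # individual movement
    trial_fit = fitness(trial)                                # one new evaluation per fish
    improvement = fit - trial_fit
    moved = improvement > 0
    pop[moved] = trial[moved]                                 # keep only improving moves
    fit[moved] = trial_fit[moved]
    weights += improvement / (np.abs(improvement).max() + 1e-12)  # feeding operator
    weights = np.clip(weights, 1.0, 500.0)
    step *= 0.99                                              # decaying step size
print(fit.min())
```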
]]>Computation doi: 10.3390/computation13040101
Authors: Caixia Chen Yong Yang Yonghua Yan
Micro-vortex generators (MVGs) are widely utilized as passive devices to control flow separation in supersonic boundary layers by generating ring-like vortices that mitigate shock-induced effects. This study employs large eddy simulation (LES) to investigate the flow structures in a supersonic boundary layer (Mach 2.5, Re = 5760) controlled by two MVGs installed in tandem, with spacings varying from 11.75 h to 18.75 h (h = MVG height), alongside a single-MVG reference case. A fifth-order WENO scheme and third-order TVD Runge–Kutta method were used to solve the unfiltered Navier–Stokes equations, with the Liutex method applied to visualize vortex structures. Results reveal that tandem MVGs produce complex vortex interactions, with spanwise and streamwise vortices merging extensively, leading to a significant reduction in vortex intensity due to mutual cancellation. A momentum deficit forms behind the second MVG, weakening that from the first, while the boundary layer energy thickness doubles compared to the single-MVG case, indicating increased energy loss. Streamwise vorticity distributions and instantaneous streamlines highlight intensified interactions with closer spacings, yet this complexity diminishes overall flow control effectiveness. Contrary to expectations, the tandem configuration does not enhance boundary layer control but instead weakens it, as evidenced by reduced vortex strength and amplified energy dissipation. These findings underscore a critical trade-off in tandem MVG deployment, suggesting that while vortex interactions enrich flow complexity, they may compromise the intended control benefits in supersonic flows, with implications for optimizing MVG arrangements in practical applications.
]]>Computation doi: 10.3390/computation13040100
Authors: Roman V. Ivanov
This paper presents a new model of the term structure of interest rates based on the continuous Ho–Lee model. In this model, we suggest that the drift and volatility coefficients depend additionally on a generalized inverse Gaussian (GIG) distribution. Analytical expressions for the bond price and its moments are derived in the new GIG continuous Ho–Lee model. We also compute, within this model, the prices of European call and put options written on the bond. The resulting formulas are expressed in terms of the Humbert confluent hypergeometric function of two variables. A numerical experiment shows that the third and fourth moments of the bond prices differ substantially between the continuous Ho–Lee and GIG continuous Ho–Lee models.
]]>Computation doi: 10.3390/computation13040099
Authors: Chieh-Hsun Wu
This paper generalizes the efficient matrix decomposition method for solving the finite-difference (FD) discretized three-dimensional (3D) Poisson’s equation using symmetric 27-point, 4th-order accurate stencils to accommodate additional boundary conditions (BCs), i.e., Dirichlet, Neumann, and Periodic BCs. It employs equivalent Dirichlet nodes to streamline source term computation due to BCs. A generalized eigenvalue formulation is presented to accommodate the flexible 4th-order stencil weights. The proposed method significantly enhances computational speed by reducing the 3D problem to a set of independent 1D problems. Compared to the typical matrix inversion technique, it yields a speed-up ratio proportional to n⁴, where n is the number of nodes along one side of the cubic domain. Accuracy is validated using Gaussian and sinusoidal source fields, showing 4th-order convergence for Dirichlet and Periodic boundaries, and 2nd-order convergence for Neumann boundaries due to extrapolation limitations—though with lower errors than traditional 2nd-order schemes. The method is also applied to vortex-in-cell flow simulations, demonstrating its capability to handle outer boundaries efficiently and its compatibility with immersed boundary techniques for internal solid obstacles.
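The structural idea, reducing a multidimensional Poisson solve to independent one-dimensional problems through an eigen-decomposition of the 1D operator, can be shown in a simpler setting. The sketch below is a 2D, second-order, homogeneous-Dirichlet analogue on a manufactured problem; it is not the paper's 3D 27-point 4th-order scheme.

```python
import numpy as np

# 2D fast Poisson solve via eigendecomposition of the 1D Laplacian:
# (T x I + I x T) u = f becomes independent scalar solves in the eigenbasis.
n = 64
h = 1.0 / (n + 1)
T = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
lam, Q = np.linalg.eigh(T)                       # 1D eigenpairs, reused in both directions

x = np.arange(1, n + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
f = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # manufactured RHS

f_hat = Q.T @ f @ Q                              # transform to the eigenbasis
u_hat = f_hat / (lam[:, None] + lam[None, :])    # independent scalar "1D" solves
u = Q @ u_hat @ Q.T                              # transform back

exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
print(np.max(np.abs(u - exact)))                 # O(h^2) error, roughly 2e-4 for n = 64
```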
]]>Computation doi: 10.3390/computation13040098
Authors: Manuel J. C. S. Reis
The rapid expansion of 5G networks and edge computing has amplified security challenges in Internet of Things (IoT) environments, including unauthorized access, data tampering, and DDoS attacks. This paper introduces EdgeChainGuard, a hybrid blockchain-based authentication framework designed to secure 5G-enabled IoT systems through decentralized identity management, smart contract-based access control, and AI-driven anomaly detection. By combining permissioned and permissionless blockchain layers with Layer-2 scaling solutions and adaptive consensus mechanisms, the framework enhances both security and scalability while maintaining computational efficiency. Using synthetic datasets that simulate real-world adversarial behaviour, our evaluation shows an average authentication latency of 172.50 s and a 50% reduction in gas fees compared to traditional Ethereum-based implementations. The results demonstrate that EdgeChainGuard effectively enforces tamper-resistant authentication, reduces unauthorized access, and adapts to dynamic network conditions. Future research will focus on integrating zero-knowledge proofs (ZKPs) for privacy preservation, federated learning for decentralized AI retraining, and lightweight anomaly detection models to enable secure, low-latency authentication in resource-constrained IoT deployments.
]]>Computation doi: 10.3390/computation13040097
Authors: Christos Kountzakis Vasileia Tsachouridou-Papadatou
The aim of the first part of this paper is to determine whether the set of Proper Efficient Points and the set of Pareto Efficient Points coincide in Euclidean spaces. In the second part of the paper, we show that supporting prices, which are in fact strictly positive, exist for a large class of exchange economies. A consequence of this result is a generalized form of the Second Welfare Theorem. The properties of the cones’ bases are significant for this purpose.
]]>Computation doi: 10.3390/computation13040096
Authors: Fahd A. Ghanem M. C. Padma Hudhaifa M. Abdulwahab Ramez Alkhatib
The field of text summarization has evolved from basic extractive methods that identify key sentences to sophisticated abstractive techniques that generate contextually meaningful summaries. In today’s digital landscape, where an immense volume of textual data is produced every day, the need for concise and coherent summaries is more crucial than ever. However, summarizing short texts, particularly from platforms like Twitter, presents unique challenges due to character constraints, informal language, and noise from elements such as hashtags, mentions, and URLs. To overcome these challenges, this paper introduces a deep learning framework for automated short text summarization on Twitter. The proposed approach combines bidirectional encoder representations from transformers (BERT) with a transformer-based encoder–decoder architecture (TEDA), incorporating an attention mechanism to improve contextual understanding. Additionally, long short-term memory (LSTM) networks are integrated within BERT to effectively capture long-range dependencies in tweets and their summaries. This hybrid model ensures that generated summaries remain informative, concise, and contextually relevant while minimizing redundancy. The performance of the proposed framework was assessed using three benchmark Twitter datasets—Hagupit, SHShoot, and Hyderabad Blast—with ROUGE scores serving as the evaluation metric. Experimental results demonstrate that the model surpasses existing approaches in accurately capturing key information from tweets. These findings underscore the framework’s effectiveness in automated short text summarization, offering a robust solution for efficiently processing and summarizing large-scale social media content.
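A small sketch of the kind of tweet noise removal described above (URLs, mentions, hashtags, redundant whitespace) is given below. The regular expressions and the example tweet are illustrative assumptions and do not reproduce the authors' preprocessing pipeline.

```python
import re

# Toy tweet cleaner: strip URLs and @mentions, keep hashtag words, collapse whitespace.
def clean_tweet(text):
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # strip URLs
    text = re.sub(r"@\w+", " ", text)                    # strip @mentions
    text = re.sub(r"#(\w+)", r"\1", text)                # keep hashtag word, drop '#'
    text = re.sub(r"\s+", " ", text).strip()             # collapse whitespace
    return text

print(clean_tweet("Heavy rain reported in the city #storm @user http://t.co/abc"))
# -> "Heavy rain reported in the city storm"
```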
]]>Computation doi: 10.3390/computation13040095
Authors: Cesar U. Solis Jorge Morales Carlos M. Montelongo
This work establishes a simple algorithm to recover an information vector from a predefined database that is available at all times. The information analyzed may be incomplete, damaged, or corrupted. The algorithm is inspired by Hopfield Neural Networks (HNNs), which reconstruct an information vector recursively through an energy-minimizing process, but the procedure presented here generates results in a single iteration. Images are used to build the information vectors for the recovery application. In addition, a filter is added to the algorithm to focus on the most important information when reconstructing data, allowing it to handle damaged or incomplete vectors without losing its non-iterative character. A brief theoretical introduction and a numerical validation of the information recovery are presented using an example database containing 40 images.
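For context, a classical Hopfield-style recall already requires only matrix operations: the sketch below stores a few ±1 patterns in an outer-product weight matrix and recovers a damaged probe with one synchronous update. The pattern sizes and damage level are illustrative, and the paper's filtered single-pass procedure is not reproduced here.

```python
import numpy as np

# Classical Hopfield storage (outer-product rule) and one synchronous recall step.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(5, 400))            # 5 stored +/-1 "images"
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0.0)                                  # no self-connections

probe = patterns[2].copy()
probe[:120] = rng.choice([-1, 1], size=120)               # damage 30% of the entries
recalled = np.sign(W @ probe)                             # one synchronous update
print(np.mean(recalled == patterns[2]))                   # typically close to 1.0
```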
]]>Computation doi: 10.3390/computation13040094
Authors: Hui Yuan Yaoke Shi Long Li Guobi Ling Jingxiao Zeng Zhiwen Wang
Accurate fault diagnosis in analog circuits faces significant challenges owing to the inherent complexity of fault data patterns and the limited feature representation capabilities of conventional methodologies. Addressing the limitations of current convolutional neural networks (CNN) in handling heterogeneous fault characteristics, this study presents an efficient channel attention-enhanced multi-input CNN framework (ECA-MI-CNN) with dual-domain feature fusion, demonstrating three key innovations. First, the proposed framework addresses multi-domain feature extraction through parallel CNN branches specifically designed for processing time-domain and frequency-domain features, effectively preserving their distinct characteristic information. Second, the incorporation of an efficient channel attention (ECA) module between convolutional layers enables adaptive feature response recalibration, significantly enhancing discriminative feature learning while maintaining computational efficiency. Third, a hierarchical fusion strategy systematically integrates time-frequency domain features through concatenation and fully connected layer transformations prior to classification. Comprehensive simulation experiments conducted on Butterworth low-pass filters and two-stage quad op-amp dual second-order low-pass filters demonstrate the framework’s superior diagnostic capabilities. Real-world validation on Butterworth low-pass filters further reveals substantial performance advantages over existing methods, establishing an effective solution for complex fault pattern recognition in electronic systems.
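A minimal ECA block in the style of ECA-Net is sketched below in PyTorch: global average pooling, a 1D convolution across channels, and a sigmoid gate that rescales the feature map. The kernel size and tensor shapes are illustrative, and the authors' full dual-branch time/frequency network is not reproduced.

```python
import torch
import torch.nn as nn

# Minimal ECA-style channel attention block (generic sketch, not the paper's network).
class ECA(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):                       # x: (batch, channels, H, W)
        y = self.pool(x)                        # (batch, channels, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)       # (batch, 1, channels)
        y = self.gate(self.conv(y))             # cross-channel interaction + gate
        y = y.transpose(1, 2).unsqueeze(-1)     # back to (batch, channels, 1, 1)
        return x * y                            # recalibrated feature map

x = torch.randn(8, 64, 32, 32)
print(ECA()(x).shape)                           # torch.Size([8, 64, 32, 32])
```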
]]>Computation doi: 10.3390/computation13040093
Authors: Chafik Boulealam Hajar Filali Jamal Riffi Adnane Mohamed Mahraz Hamid Tairi
Existing neural network architectures often struggle with two critical limitations: (1) information loss during dataset length standardization, where variable-length samples are forced into fixed dimensions, and (2) inefficient feature selection in single-modal systems, which treats all features equally regardless of relevance. To address these issues, this paper introduces the Deep Multi-Components Neural Network (DMCNN), a novel architecture that processes variable-length data by regrouping samples into components of similar lengths, thereby preserving information that traditional methods discard. DMCNN dynamically prioritizes task-relevant features through a component-weighting mechanism, which calculates the importance of each component via loss functions and adjusts weights using a SoftMax function. This approach eliminates the need for dataset standardization while enhancing meaningful features and suppressing irrelevant ones. Additionally, DMCNN seamlessly integrates multimodal data (e.g., text, speech, and signals) as separate components, leveraging complementary information to improve accuracy without requiring dimension alignment. Evaluated on the Multimodal EmotionLines Dataset (MELD) and CIFAR-10, DMCNN achieves state-of-the-art accuracy of 99.22% on MELD and 97.78% on CIFAR-10, outperforming existing methods like MNN and McDFR. The architecture’s efficiency is further demonstrated by its reduced trainable parameters and robust handling of multimodal and variable-length inputs, making it a versatile solution for classification tasks.
]]>Computation doi: 10.3390/computation13040092
Authors: Wilson Chango Pamela Buñay Juan Erazo Pedro Aguilar Jaime Sayago Angel Flores Geovanny Silva
The purpose of this study lies in developing a comparison of neural network-based models for vehicular congestion prediction, with the aim of improving urban mobility and mitigating the negative effects associated with traffic, such as accidents and congestion. This research focuses on evaluating the effectiveness of different neural network architectures, specifically Transformer and LSTM, in order to achieve accurate and reliable predictions of vehicular congestion. To carry out this research, a rigorous methodology was employed that included a systematic literature review based on the PRISMA methodology, which allowed for the identification and synthesis of the most relevant advances in the field. Likewise, the Design Science Research (DSR) methodology was applied to guide the development and validation of the models, and the CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology was used to structure the process, from understanding the problem to implementing the solutions. The dataset used in this study included key variables related to traffic, such as vehicle speed, vehicular flow, and weather conditions. These variables were processed and normalized to train and evaluate various neural network architectures, highlighting LSTM and Transformer networks. The results obtained demonstrated that the LSTM-based model outperformed the Transformer model in the task of congestion prediction. Specifically, the LSTM model achieved an accuracy of 0.9463, with additional metrics such as a loss of 0.21, an accuracy of 0.93, a precision of 0.29, a recall of 0.71, an F1-score of 0.42, an MSE of 0.07, and an RMSE of 0.26. In conclusion, this study demonstrates that the LSTM-based model is highly effective for predicting vehicular congestion, surpassing other architectures such as Transformer. The integration of this model into a simulation environment showed that real-time traffic information can significantly improve urban mobility management. These findings support the utility of neural network architectures in sustainable urban planning and intelligent traffic management, opening new perspectives for future research in this field.
]]>Computation doi: 10.3390/computation13040091
Authors: Araek Tashkandi
Eye illness detection is important, yet it can be difficult and error-prone. In order to effectively and promptly diagnose eye problems, doctors must use cutting-edge technologies. The goal of this research paper is to develop a sophisticated model that will help physicians detect different eye conditions early on. These conditions include age-related macular degeneration (AMD), diabetic retinopathy, cataracts, myopia, and glaucoma. Common eye conditions include cataracts, which cloud the lens and cause blurred vision, and glaucoma, which can cause vision loss due to damage to the optic nerve. The two conditions that could cause blindness if treatment is not received are age-related macular degeneration (AMD) and diabetic retinopathy, a side effect of diabetes that destroys the blood vessels in the retina. People with high myopia, a severe form of nearsightedness typically defined as a refractive error of −5 diopters or greater, are also more likely to develop problems such as myopic macular degeneration, glaucoma, and retinal detachment. We intend to apply a user-friendly approach that will allow for faster and more efficient examinations. Our research attempts to streamline the eye examination procedure, making it simpler and more accessible than traditional hospital approaches. Our goal is to use deep learning and machine learning to develop a highly accurate model that can assess medical images, such as retinal scans. This was accomplished by using a huge dataset to train the machine learning and deep learning models, as well as sophisticated image processing techniques to assist the algorithms in identifying patterns of various eye illnesses. Following training, we discovered that the CNN, VggNet, MobileNet, and hybrid deep learning models outperformed the SVM and Random Forest machine learning models in terms of accuracy, achieving above 98%. Therefore, our model could assist physicians in enhancing patient outcomes, raising survival rates, and creating more effective treatment plans for patients with these illnesses.
]]>Computation doi: 10.3390/computation13040090
Authors: Andriy A. Avramenko Igor V. Shevchuk Nataliia P. Dmitrenko Andriy I. Tyrinov Yiliia Y. Kovetska Andriy S. Kobzar
The article presents the results of analytical and numerical modeling of electron fluid motion and heat generation in a rectangular conductor under an alternating electric potential. The analytical solution is based on a series expansion (Fourier method) and a double series solution (method of eigenfunction decomposition). The numerical solution is based on the lattice Boltzmann method (LBM). An analytical solution for the electric current was obtained. This enables estimating the heat generation in the conductor and determining how the parameters characterizing the conductor dimensions, the parameter M (phenomenological transport time describing momentum-nonconserving collisions), the Knudsen number (mean free path for momentum-nonconserving collisions), and the Sh number (frequency) influence the heat generation rate as an electron flow passes through the conductor.
]]>Computation doi: 10.3390/computation13040089
Authors: Gurami Tsitsiashvili Marina Osipova
The paper considers queuing networks in which prohibitions on transitions between network nodes determine the protocol of their operation. In the graph of transient network intensities, a set of base vertices is allocated (proportional in number to the number of edges), and we ask whether some subset of it can be deleted such that the stationary distribution of the Markov process describing the functioning of the network is preserved. For this condition to be fulfilled, it is sufficient that the set of vertices of the graph of transient intensities, after the removal of a subset of the base vertices, coincide with the set of states of the Markov process and that this graph remain connected. It is proved that the ratio of the number of remaining base vertices to their total number n converges to one-half as n→∞. In this paper, we seek graphs of transient intensities with a minimal (in a certain sense) set of edges for open and closed service networks.
]]>Computation doi: 10.3390/computation13040088
Authors: Anubhav Gupta Islam Osman Mohamed S. Shehata W. John Braun Rebecca E. Feldman
Medical imaging tasks are very challenging due to the lack of publicly available labeled datasets. Hence, it is difficult to achieve high performance with existing deep learning models, as they require massive labeled datasets to be trained effectively. An alternative solution is to use pre-trained models and fine-tune them on a medical imaging dataset. However, existing models are pre-trained on natural images, which represent a different domain from that of medical imaging; this leads to poor performance due to domain shift. To overcome these problems, we propose a backbone pre-trained on a collected medical imaging dataset with a self-supervised learning tool called a masked autoencoder. This backbone can be used as a pre-trained model for any medical imaging task, as it is trained to learn a visual representation of different types of medical images. To evaluate the performance of the proposed backbone, we use four different medical imaging tasks. The results are compared with those of existing pre-trained models. These experiments show the superiority of our proposed backbone in medical imaging tasks.
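A sketch of the masked-autoencoder pretraining setup described above follows: an image is split into patches, a large random fraction is hidden, and only the visible patches would be fed to the encoder while the hidden ones serve as reconstruction targets. The patch size and mask ratio below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# MAE-style patchify-and-mask step on a single (toy) image.
rng = np.random.default_rng(0)
image = rng.normal(size=(224, 224))
p = 16                                                     # patch size
patches = image.reshape(14, p, 14, p).transpose(0, 2, 1, 3).reshape(196, p * p)

mask_ratio = 0.75
n_keep = int(round(196 * (1 - mask_ratio)))
perm = rng.permutation(196)
visible_idx, masked_idx = perm[:n_keep], perm[n_keep:]
encoder_input = patches[visible_idx]                       # only visible patches are encoded
targets = patches[masked_idx]                              # reconstruction targets
print(encoder_input.shape, targets.shape)                  # (49, 256) (147, 256)
```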
]]>