SPECIAL ISSUE
Enhancing Decision-Making with Machine Learning and AI: Privacy-Preserving Access Control for Data Analysis and Applications
Editorial Note
We are happy to present this special issue titled "Enhancing Decision-Making with Machine Learning and AI: Privacy-Preserving Access Control for Data Analysis and Applications", which brings together innovative research contributions at the intersection of artificial intelligence, machine learning, and data privacy.
In the digital era, the exponential growth of data has empowered organizations and researchers to make smarter, data-driven decisions. However, this progress comes with significant concerns surrounding privacy, security, and ethical data use. This special issue aims to address these concerns by exploring how intelligent techniques and privacy-preserving mechanisms can be integrated into decision-making systems and analytical models.
The articles featured in this issue cover a wide spectrum of topics, including but not limited to:
- Secure and privacy-aware machine learning frameworks
- AI-driven access control models for sensitive data
- Federated learning and edge computing for secure analytics
- Cryptographic techniques in AI-based decision systems
- Ethical implications and regulatory perspectives in AI-enabled data analysis
- Real-world applications in healthcare, finance, smart cities, and beyond
This curated collection represents the collaborative efforts of researchers, practitioners, and domain experts who are driving advancements in responsible AI and secure data analytics. We are confident that the contributions in this issue will serve as a valuable resource for academics, developers, and decision-makers seeking to build intelligent systems that balance utility with privacy.
We extend our sincere gratitude to all the authors for their high-quality submissions and to the reviewers for their timely and insightful feedback. We also thank the editorial team and the journal management for their continued support in bringing this special issue to fruition.
We hope that the research presented in this issue will inspire further exploration and innovation in building AI systems that are not only intelligent but also secure, ethical, and trustworthy.
Guest Editors
Dr. P. Raviraj
Professor & Head,
Dept. of Computer Science & Engineering,
GSSS Institute of Engineering and Technology for Women,
K R S Road, Metagalli,
Mysuru, Karnataka State, India
Dr. Jingshan Huang
Professor, School of Computing,
Professor, College of Medicine,
University of South Alabama,
307 N University Blvd, United States.
Dr. Maode Ma
Research Professor,
KINDI Center for Computing Research,
College of Engineering,
Qatar University, Doha, Qatar.
Papers
Predicting Schizophrenia from Clinical Data with Feature Selection and CNN Logistic Regression Classification
The diagnosis of schizophrenia during its early stages allows for better treatment outcomes because of the disorder's complex, debilitating nature. Early treatment and diagnosis of schizophrenia can benefit greatly from machine learning models. The proposed data processing approach incorporates SMOTE (Synthetic Minority Oversampling Technique) to address class imbalance together with normalization to standardize feature values. A hybrid model design made up of Lasso Regression for feature selection and CNN-Logistic Regression for classification handles both imbalanced classes and features with different scales in the dataset. Lasso Regression selects important features from the dataset, which enhances both the interpretability and the predictive power of the model. The integrated system applies CNN components to extract complex neuroimaging patterns from MRI data, followed by Logistic Regression models to produce binary schizophrenia versus non-schizophrenia predictions. Spatial hierarchies of features are discovered efficiently by the CNN operations, allowing Logistic Regression to perform its linear classification duties. Combining traditional machine learning with deep learning in CNN-Logistic Regression creates a comprehensive and efficient approach to schizophrenia diagnosis. According to initial results, the proposed hybrid approach produces superior classification performance compared to existing methods, providing a valuable tool for clinical applications in schizophrenia detection.
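As an editorial illustration of the kind of pipeline this abstract describes, the following is a minimal, hedged sketch: SMOTE balancing, normalization, Lasso-based feature selection, and a plain logistic-regression head standing in for the paper's CNN-Logistic Regression hybrid. The feature matrix, labels, and all settings are placeholders, not the authors' clinical data or implementation.

```python
# Hedged sketch: SMOTE balancing, normalization, Lasso-based feature
# selection, and a logistic-regression classifier. The arrays below are
# synthetic placeholders, not the paper's clinical dataset.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression, Lasso
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                  # placeholder clinical features
y = (rng.random(500) < 0.2).astype(int)         # imbalanced labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)    # balance classes
scaler = MinMaxScaler().fit(X_tr)                               # normalize features
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

selector = SelectFromModel(Lasso(alpha=0.01)).fit(X_tr, y_tr)   # Lasso feature selection
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr), y_tr)

print(classification_report(y_te, clf.predict(selector.transform(X_te))))
```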
A Predictive Model for Heart Disease Risk Using ESVMRF and Advanced Feature Selection Techniques
Heart disease is a major health problem affecting populations worldwide. It typically impairs the heart's ability to circulate blood efficiently throughout the body, making it difficult for the heart to function normally and eventually leading to heart failure. The complexity of early heart disease diagnosis contributes significantly to its impact on quality of life, and a timely diagnosis is vital to preventing and treating heart failure effectively. A prognostic method for detecting cardiac abnormalities can be used to determine the likelihood of developing heart disease. However, predicting heart disease risk factors with high sensitivity, specificity, and accuracy while keeping false scores low is one of the most critical and complex tasks. To address this problem, we propose an Enhanced Support Vectorized Machine-based Risk Factor (ESVMRF) technique to predict heart disease risk levels more accurately. We use the Standardized Min-Max Scalar (SMMS) method, a data normalization technique, during pre-processing to handle missing attribute values in the heart disease dataset. Outliers are removed by analysing the upper and lower bounds using the Inter-Quartile Range-Based Outlier Removal (IQROR) method. Next, we employ the Relation Feature Weight Vector (RFWV) algorithm, a feature selection technique, to select the optimal features and analyse their weights. Using Machine Learning (ML) techniques, the proposed ESVMRF method then predicts the risk level of heart patients, differentiating healthy patients from patients with heart disease. The method has been shown to predict risk factors for heart disease patients based on metrics such as sensitivity, specificity, accuracy, precision, and F-score. Its accuracy increased to 95.4%, providing a valuable and reliable method for predicting heart disease.
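To make the pre-processing steps named in this abstract concrete, here is a minimal sketch of IQR-based outlier removal, min-max scaling, and an ordinary SVM classifier standing in for the ESVMRF model; the data and fences are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: IQR-based outlier removal, min-max scaling, and an SVM
# risk classifier standing in for the ESVMRF model; data is synthetic.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 13))                 # placeholder heart-disease features
y = rng.integers(0, 2, size=400)               # placeholder labels

# Remove rows with any feature outside the 1.5*IQR fences (IQROR-style step).
q1, q3 = np.percentile(X, [25, 75], axis=0)
iqr = q3 - q1
mask = np.all((X >= q1 - 1.5 * iqr) & (X <= q3 + 1.5 * iqr), axis=1)
X, y = X[mask], y[mask]

X = MinMaxScaler().fit_transform(X)            # SMMS-style normalization
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5, scoring="accuracy")
print("mean CV accuracy:", scores.mean())
```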
Predicting the Onset of Diabetes Using RNN-RBFN: A Robust Deep Learning Model for Precision in Healthcare
According to the International Diabetes Federation (IDF), diabetes is a chronic condition and one of the most rapidly growing global health issues of our day. Untreated diabetes can harm the kidneys, nerves, and heart and trigger eye illnesses such as diabetic retinopathy. According to the IDF, the healthcare cost of diabetes was 760.3 billion USD in 2019 and is projected to rise to 824.7 billion USD by 2030. Identifying people at risk can counteract health issues, improve quality of life, and avoid substantial costs. In healthcare systems, misclassifying the smaller group of affected patients is costlier than misclassifying healthy individuals, yet ML algorithms typically assume a uniform misclassification error and a symmetric class distribution, which makes it difficult to analyse diabetes data to predict the onset of diabetes. A Recurrent Neural Network with Radial Basis Function Networks (RNN-RBFN) method based on a DL methodology is proposed to resolve this issue. We collected the diabetic dataset from the Mendeley Data website, which includes a healthcare margin. The proposed method has three stages: pre-processing, feature selection, and classification. In the first stage, the diabetic dataset is pre-processed using an imputation method, which replaces absent data in the dataset with alternative values so that as much of the information in the dataset as possible is retained; this method is used because removing records each time is impractical and can significantly reduce the dataset's size. The next stage is feature selection, which uses linear discriminant analysis (LDA) to project the data into a low-dimensional space that maximizes class separation. This is accomplished by finding a set of linear discriminants that maximizes the ratio of between-class variance to within-class variance. The last stage is classification; the RNN-RBFN can learn more complex patterns and improve classification performance compared to using either network in isolation. Experimental results show that our method outperforms previous methods in precision, recall, F1 score, accuracy, and time complexity.
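The imputation and LDA stages described above can be illustrated with a short sketch; a plain logistic regression stands in for the RNN-RBFN classifier, and the diabetes-like table is a synthetic placeholder.

```python
# Hedged sketch: mean imputation and LDA projection of a diabetes-style
# table, followed by a simple classifier in place of the RNN-RBFN model.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 8))                       # placeholder clinical features
X[rng.random(X.shape) < 0.05] = np.nan              # simulate missing entries
y = rng.integers(0, 2, size=600)

pipe = make_pipeline(
    SimpleImputer(strategy="mean"),                 # fill absent values
    LinearDiscriminantAnalysis(n_components=1),     # maximize class separation
    LogisticRegression(),
)
print("mean CV accuracy:", cross_val_score(pipe, X, y, cv=10).mean())
```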
Improving Cardiovascular Disease Detection with AutoE-SOM through Deep Learning and Feature Selection
Cardiovascular disease (CVD) is responsible for numerous deaths worldwide and impacts millions of individuals annually. Its impact spans all demographics, making it a major public health concern. In CVD, the heart does not pump sufficient blood to the rest of the body, so a precise analysis of CVD is critical for its prevention and treatment. CVD encompasses four categories: coronary heart disease, transient ischemic attack, peripheral artery disease, and aortic disease. Symptoms often manifest in the elderly and can be mistaken for other diseases, making an accurate diagnosis challenging and potentially leading to fatal outcomes. ML and DL algorithms play a crucial role in disease analysis, as they can categorize or predict outcomes. However, a common challenge in ML is the high dimensionality of data, which complicates feature selection; operations performed on such data require more training, leading to training loss and low precision and recall that degrade detection accuracy. An Autoencoder methodology combined with Self-Organizing Maps (Auto E-SOM) is proposed to resolve this issue. The CVD dataset from the Kaggle repository, which includes the healthcare margins, is used. The proposed system works in pre-processing, feature selection, and classification phases. In the first phase, the CVD dataset is pre-processed using a normalization method that converts features to a standard scale, improving the efficiency and accuracy of the output values; the primary purpose of normalization is to remove possible biases and distortions arising from differences in feature scales. The second phase is feature selection: the features relevant to CVD are extracted from the dataset using the SVM method. The third phase is classification. The Auto E-SOM helps capture the underlying structure of the dataset, recognize patterns, and enhance classification accuracy by prioritizing the most significant features. This novel approach aims to develop a reliable and accurate CVD detection model that could support healthcare specialists in initial diagnosis and intervention, eventually leading to improved patient outcomes. Experimental results show that our method outperforms previous methods in precision, recall, F1 score, accuracy, and time complexity.
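A minimal sketch of the described stages follows: normalization, SVM-based feature selection, and a small autoencoder whose bottleneck representation could feed a self-organizing map. The SOM stage itself is only indicated in a comment, and the features, labels, and layer sizes are placeholders rather than the paper's settings.

```python
# Hedged sketch: normalization, SVM-based feature selection, and a small
# autoencoder whose bottleneck could feed a self-organizing map (SOM);
# the SOM stage itself is only indicated in comments.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel
import tensorflow as tf

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 12)).astype("float32")   # placeholder CVD features
y = rng.integers(0, 2, size=500)

X = StandardScaler().fit_transform(X)
X_sel = SelectFromModel(LinearSVC(dual=False), threshold="median").fit_transform(X, y)

# Autoencoder: compress selected features to a low-dimensional code.
inp = tf.keras.Input(shape=(X_sel.shape[1],))
code = tf.keras.layers.Dense(3, activation="relu")(inp)
out = tf.keras.layers.Dense(X_sel.shape[1])(code)
auto = tf.keras.Model(inp, out)
auto.compile(optimizer="adam", loss="mse")
auto.fit(X_sel, X_sel, epochs=20, verbose=0)

codes = tf.keras.Model(inp, code).predict(X_sel, verbose=0)
# A SOM (e.g. MiniSom) would next cluster `codes` to expose structure.
print("bottleneck codes shape:", codes.shape)
```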
Deep Learning Based Agriculture Traffic Prediction Using Gated Recursive Deep Neural Network in an IoT Environment
IoT traffic data makes it possible to apply deep learning technology to agriculture. The application of deep learning algorithms in agriculture has improved resource management and decision-making with positive outcomes. Precision agriculture can help regulate crop yields by applying nutrients only when necessary, enhancing crop quality and lessening adverse environmental effects; this is made possible by IoT capabilities. Identifying redundant data traffic remains a major research challenge in IoT-based agricultural automation, despite the numerous solutions that have been offered. Additionally, farmers do not receive the necessary information about water levels, soil conditions, and related parameters, and standard processing methods make it challenging to handle the complexity and dynamics of low-rate data transfer. To overcome these issues, the proposed Gated Recursive Deep Neural Network (GRDNN) is used to increase the prediction accuracy of network traffic in an IoT environment, with transfer learning addressing the issue of insufficient IoT agriculture data. The behavioural support factors of the features are filtered through an Entity Spectral spider optimization algorithm to find the relational feature weights, and a fitness function evaluates the scalar variations to obtain a support index based on the behavioural variation. The features are indexed by class with reference to the average mean weight and selected through a Cluster-Scalar Entity Feature Selection (CSEFS) model. The selected features are passed to a Soft-max Neural Network (SMNN) that is optimized with the GRDNN classifier; this optimization evaluates the entity scalar values to train the features in the GRDNN and classify the result. The proposed system predicts threat levels to categorize risk based on feature threshold margins, improving security in the IoT as well. The proposed system achieves higher performance than other systems in precision rate, with the lowest false rate, to attain high accuracy.
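As a rough illustration of gated recurrent traffic prediction in an IoT setting, the sketch below trains a standard GRU on a synthetic traffic series; the GRU is only a stand-in for the paper's GRDNN, and the window size and data are assumptions.

```python
# Hedged sketch: a GRU-based sequence model as a stand-in for the GRDNN
# traffic predictor; the IoT traffic series here is synthetic.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)

window = 24                                   # past readings per sample
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None].astype("float32")            # (samples, timesteps, 1)

model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, input_shape=(window, 1)),   # gated recurrent layer
    tf.keras.layers.Dense(1),                            # next-step traffic value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print("test MSE:", model.evaluate(X[-200:], y[-200:], verbose=0))
```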
Efficient Extraction of Insights from Large-Scale Social Media Data through Distributed Deep Learning
The requirement for effective pattern and insight extraction has grown critical due to the exponential growth of social media data. The authors of this study offer a fresh strategy to overcome this difficulty by applying deep learning methods to massive amounts of social media data stored in different places. The primary goal is to facilitate the fast and accurate detection of relevant patterns, sentiments, and trends in order to obtain valuable insights into user behaviour, preferences, and social interactions. The proposed system integrates distributed computing with deep learning approaches to handle and analyse massive volumes of social media data concurrently. Leveraging the power of distributed systems significantly improves both scalability and processing speed, enabling real-time or near real-time analysis of dynamic social media material. Social media data differs from traditional datasets in ways that include unstructured text, video, and user interactions, and the deep learning models are adapted and improved to manage them. This research presents an approach to evaluating a CNN model's interpretability. The suggested Dolphin Echolocation Algorithm (DEA) is added during the feature selection phase and used to fine-tune the CNN's filter weights. Through a process of backtracking analysis on model prediction results, the presented approach can conduct multi-angle analysis on the discriminant outcomes of multi-class text and multi-label classification tasks. Data diversity, volume, and velocity challenges are typical in large-scale social media datasets, and this study helps address those issues. The suggested method also strives to use as few resources as possible, which reduces costs and helps the environment, both of which are crucial when dealing with distributed databases of any size. Extensive tests are performed on various datasets obtained from major social media sites to validate the efficacy and efficiency of the proposed framework. The outcomes show that our strategy is more precise, faster, and more scalable than the status quo.
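For orientation only, the sketch below trains a tiny text CNN under TensorFlow's data-parallel MirroredStrategy. This hints at the distributed-processing idea but is much narrower than the geographically distributed setting the abstract describes; the corpus, labels, and model are toy assumptions.

```python
# Hedged sketch: a small text-CNN trained under a data-parallel strategy;
# the corpus and labels are toy placeholders, not real social media data.
import tensorflow as tf

texts = ["great product", "terrible service", "loved it", "waste of money"] * 64
labels = [1, 0, 1, 0] * 64

vec = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=8)
vec.adapt(texts)
X = vec(tf.constant(texts))                      # integer token ids
y = tf.constant(labels)

strategy = tf.distribute.MirroredStrategy()      # data-parallel across available devices
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(1000, 16),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),   # convolutional text features
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```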
Integrating Deep Learning and Traditional Learning for Content-Based Image Retrieval System in Lymphoma Diagnosis
Lymphoma diagnosis relies on the accurate and timely identification of lymphoma patterns, and Content-Based Image Retrieval (CBIR) systems leveraging medical imaging have shown promise in this regard. This study proposes a novel framework that integrates deep learning and traditional learning methodologies to enhance CBIR systems for lymphoma diagnosis. The approach uses convolutional neural networks (CNNs), a type of deep neural network, to derive high-level information from lymphoma images. These attributes are subsequently fed into conventional learning algorithms such as support vector machines (SVMs) or random forests (RFs) for the categorization and retrieval of related lymphoma images. The suggested system exploits the advantages of both conventional learning and neural networks. Deep learning models capture complex and abstract features from lymphoma images, enabling improved discrimination between lymphoma subtypes, while traditional learning algorithms provide transparent decision rules, enhancing the interpretability and trustworthiness of the CBIR system. Through the integration of these techniques, the developed system aims to facilitate efficient analysis and retrieval of relevant lymphoma images, aiding clinicians in the diagnostic process. The performance of the system will be compared against baseline deep learning models and traditional learning approaches separately. Evaluation metrics such as precision, recall, and accuracy will be employed to assess the retrieval performance and diagnostic accuracy of the CBIR system. The anticipated outcome of this research is an improved CBIR system that enables clinicians to effectively identify and retrieve lymphoma images.
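A minimal sketch of the CNN-features-plus-traditional-classifier idea follows: a backbone extracts feature vectors, an SVM classifies them, and a nearest-neighbour index retrieves similar images. The images are random arrays and weights=None keeps the sketch offline-runnable; in practice a pretrained or lymphoma-tuned backbone would be assumed.

```python
# Hedged sketch: CNN feature extraction feeding an SVM classifier and a
# nearest-neighbour retrieval index, echoing the CBIR idea; images here
# are random arrays, not pathology data.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
images = rng.random((60, 96, 96, 3)).astype("float32")   # placeholder image batch
labels = rng.integers(0, 3, size=60)                      # placeholder lymphoma subtypes

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, pooling="avg", weights=None)
features = backbone.predict(images, verbose=0)            # deep feature vectors

clf = SVC(kernel="linear").fit(features, labels)           # subtype classification
index = NearestNeighbors(n_neighbors=5).fit(features)      # retrieval index

query = features[:1]
print("predicted subtype:", clf.predict(query))
print("retrieved neighbours:", index.kneighbors(query, return_distance=False))
```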
An Ensemble Deep Learning Model for Mental Depressive Disorder Classification and Suicidal Ideation through Tweets
Early detection of mental depression prevents the severe repercussions of long-term depressive symptoms such as suicidal thoughts and ideation. With the widespread use of social media and the internet, prompt identification of emotional reactions is essential; monitoring social media texts such as Facebook comments or tweets could therefore be highly helpful in detecting mental depression. With the advent of Artificial Intelligence (AI) techniques, early detection and classification of mental depressive disorder and suicidal ideation become possible. The proposed approach uses labelled Twitter tweets to classify depression intensity. Performance evaluation is carried out on four ensemble models, namely CNN, LSTM, LSTM+RNN, and BERT, for the classification of tweets into depressive and non-depressive classes. The parameters used for the evaluation are accuracy, precision, recall, and F1-score. From the result analysis it is inferred that the highest average accuracy and precision, 97%, are obtained for LSTM; similarly, the highest average recall and F1-score are 96% and 97%, respectively. Furthermore, optimization helps to enhance the proposed classification and makes it suitable for identifying suicidal ideation. The suggested method thus achieves better performance for the early identification of mental depression based on emotions in social media users, demonstrating the viability of CNN, BERT, RNN, and LSTM.
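Of the model families compared above, the LSTM branch can be sketched briefly as below; the tweets and labels are toy placeholders rather than the study's dataset, and the architecture is only indicative.

```python
# Hedged sketch: an LSTM tweet classifier for depressive vs. non-depressive
# text, one of the model families the abstract compares.
import tensorflow as tf

tweets = ["feeling hopeless today", "great day with friends",
          "cannot sleep, everything hurts", "excited for the weekend"] * 32
labels = [1, 0, 1, 0] * 32

vec = tf.keras.layers.TextVectorization(max_tokens=2000, output_sequence_length=12)
vec.adapt(tweets)
X, y = vec(tf.constant(tweets)), tf.constant(labels)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(2000, 32),
    tf.keras.layers.LSTM(32),                          # sequence encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),    # depressive / non-depressive
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
print("training accuracy:", model.evaluate(X, y, verbose=0)[1])
```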
Optimization of Food Supply Chain and Forecasting Unsustainable Waste Disposal Using Machine Learning
Artificial Intelligence empowers the assessment of transportation, estimation, and stock management so that fewer surplus products are produced, ending with reduced waste. This research deals with the analysis of information in order to develop a system with reduced error and enhanced benefit to the enterprise, and it also paves the way for reducing food waste in the supply chain. In contrast, online food supply also affects the environment. As an outcome, new machine learning methods can improve the productivity of food organizations while diminishing their ecological effect. This brings financial gains for the organization, for example in garbage removal, and leads to a reduced carbon footprint.
Deep Learning Based Heart Disease Risk Prediction Using SoftMax Deep Scaling Gated Adversarial Neural Network
Heart disease is one of the diseases responsible for the deaths of millions of people each year worldwide and is considered one of the main diseases of middle-aged and elderly people. The increasing rate of heart disease cases, the high mortality rate, and medical treatment expenses necessitate early diagnosis of symptoms. Prediction of cardiovascular disease is a critical challenge in the area of clinical data analysis, and data-science-based diagnosis and prediction of heart-related diseases require more precision and correctness. Deep Learning (DL) models are becoming increasingly popular for a wide range of clinical diagnostic tasks. Making accurate predictions is essential for such tasks because the results can have a big impact on patients and reduce mortality, and DL algorithms for the efficient identification of heart disease play an important role in healthcare, especially in cardiology. Machine learning algorithms previously used for feature extraction have several problems: they are less efficient for complex risk stages, increase computation time, and yield feature extraction that is inaccurate for classification and unreliable. When extensive data exist, deep learning techniques can overcome some of those limitations. To overcome these issues, we evaluate the performance of a deep learning technique using a SoftMax deep Scaling Gated Adversarial Neural Network (SmDSAN2) for accurate heart disease risk prediction. We first collected the dataset from a standard repository and began with data pre-processing to reduce null and unbalanced values based on Min-Max Z-score normalization (Mm Z-score). The feature margin range is then determined using threshold values based on the Fuzzified Support Margin Impact Rate (FSMIR). The third step is feature selection based on threshold values, selecting the maximum weighted range and the nearest values using Particle Swarm Intelligence (PSI); it iteratively ranks feature importance, removes the least important features, and rebuilds the model until the desired feature subset is obtained. The final stage is classification based on the SmDSAN2, which evaluates heart disease risk prediction and reduces the false rate. The SmDSAN2 algorithm has shown high accuracy in predicting on the dataset. Prediction is carried out by categorizing the approximation of risk level, developed using a fuzzified margin rate, to identify risk in a multiclass dataset. The SmDSAN2 algorithm will help the healthcare environment follow standard rules and also reduce risk.
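One plausible reading of the "Min-Max Z-score normalization" pre-processing step is sketched below: z-score standardization followed by min-max rescaling, with column-mean filling of null values. The feature matrix and the exact ordering of the two transforms are assumptions, not the authors' specification.

```python
# Hedged sketch of the pre-processing step: fill nulls, z-score standardize,
# then min-max rescale; one plausible reading of "Min-Max Z-score normalization".
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.normal(loc=120, scale=25, size=(300, 10))      # placeholder clinical features
X[rng.random(X.shape) < 0.02] = np.nan                  # a few null values

X = np.where(np.isnan(X), np.nanmean(X, axis=0), X)     # fill nulls with column means
mm_z = make_pipeline(StandardScaler(), MinMaxScaler())  # z-score, then scale to [0, 1]
X_norm = mm_z.fit_transform(X)
print("feature range after normalization:", X_norm.min(), "to", X_norm.max())
```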
Harmonic Elimination Using a Novel Optimization Algorithm in a Multilevel Inverter
This research describes a seven-level reduced-switch symmetrical multilevel inverter (RSS-MLI) that employs maximum power point tracking (MPPT) determined by perturbation and observation (P&O) for photovoltaic (PV) power generation. Using the MPPT technique, solar PV cells can produce their maximum power output. Taking solar radiation and PV cell temperature as input factors, the method establishes the ideal duty cycle in order to guarantee the greatest output of the DC-DC boost converter. A Fractional-Order Proportional Integral Derivative (FOPID) controller optimized using the Secretary Bird Optimization Algorithm (SBOA) is the basis for the Selective Harmonic Elimination (SHE) approach used by the RSS-MLI to eliminate harmonics from the output voltage. Inspired by the survival behaviour of the secretary bird in the wild, SBOA is a population-based metaheuristic algorithm. The SBOA is used to optimize the FOPID controller, selecting the optimum duty cycle of the RSS-MLI for total harmonic distortion (THD) reduction. Performance is assessed by implementing the proposed method in MATLAB/Simulink and contrasting it with traditional methods such as GA-PID, WSOA-PID, and WSOA-FOPID controllers.
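The perturb-and-observe MPPT loop mentioned above follows a simple rule: perturb the duty cycle, observe whether output power rose or fell, and keep or reverse the perturbation direction accordingly. A minimal sketch against a toy PV power curve is shown below; the curve and step size are assumptions, and a real controller would act on the DC-DC boost converter rather than this stand-in function.

```python
# Hedged sketch of the perturb-and-observe (P&O) MPPT loop, run against a
# toy PV power curve (the curve and step size are assumptions).
def pv_power(duty):
    # Toy unimodal PV curve with a maximum near duty = 0.62 (assumption).
    return max(0.0, 100.0 - 400.0 * (duty - 0.62) ** 2)

def perturb_and_observe(steps=50, duty=0.4, step=0.01):
    power, direction = pv_power(duty), +1
    for _ in range(steps):
        duty += direction * step                # perturb the duty cycle
        new_power = pv_power(duty)
        if new_power < power:                   # observe: power fell, so reverse direction
            direction = -direction
        power = new_power
    return duty, power

duty, power = perturb_and_observe()
print(f"converged duty cycle ~ {duty:.2f}, power ~ {power:.1f} W")
```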
Hybrid RNN RBFN Model for Accurate Diabetes Prediction Using Evolutionary Gravitational Search and Dynamic Incremental Normalization
Diabetes is a chronic metabolic illness marked by high blood glucose levels, which can lead to serious complications if not recognized and treated promptly. Diabetes must be predicted accurately and early to allow for appropriate intervention and treatment planning. This study seeks to create a high-performance predictive model for diabetes diagnosis by combining advanced pre-processing, feature selection, and classification techniques. A unique hybrid strategy is proposed, which uses Dynamic Incremental Normalization to effectively normalize the data distribution and the Evolutionary Gravitational Search Algorithm to discover the most significant features, improving model accuracy and efficiency. A hybrid Recurrent Neural Network (RNN) and Radial Basis Function Network (RBFN) model is used for classification, taking advantage of the RNN's capacity to capture sequential dependencies and the RBFN's ability to handle nonlinear decision boundaries. The model was tested with the PIMA Indian Diabetes Dataset and 10-fold cross-validation, yielding an accuracy of 99.5%, precision of 98.9%, recall of 99.1%, and F1 score of 99.3%.
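The evaluation protocol cited above, 10-fold cross-validation with accuracy, precision, recall, and F1, can be sketched as follows; a small MLP is used purely as a stand-in for the hybrid RNN-RBFN classifier, and the PIMA-shaped data here is a random placeholder.

```python
# Hedged sketch: 10-fold cross-validation reporting accuracy, precision,
# recall, and F1; an MLP stands in for the hybrid RNN-RBFN classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(7)
X = rng.normal(size=(768, 8))                  # PIMA-like feature shape (placeholder data)
y = rng.integers(0, 2, size=768)

scores = cross_validate(
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    X, y, cv=10,
    scoring=["accuracy", "precision", "recall", "f1"],
)
for metric in ("accuracy", "precision", "recall", "f1"):
    print(metric, round(scores[f"test_{metric}"].mean(), 3))
```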
Comparative Analysis of Machine Learning Models for Chronic Heart Disease Prediction
Researchers examined how machine learning algorithms predict chronic heart disease (CHD) based on a dataset of 70,000 patient records. Three models, namely Logistic Regression, Decision Tree, and Neural Network, were evaluated through multiple performance metrics, including accuracy, precision, recall, F1 score, AUC-ROC, and Matthews Correlation Coefficient (MCC). The Logistic Regression model demonstrated superior performance with 91% accuracy and an AUC of 0.78, and achieved the best precision, recall, and F1 score. The Decision Tree reached 85% accuracy but performed poorly in MCC (0.70), with an AUC of 0.64. The Neural Network achieved 88% accuracy yet produced the lowest AUC score at 0.55, while its MCC reached 0.75. The prediction results verify that Logistic Regression is the best model choice for CHD diagnosis among the tested models.
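For readers reproducing this style of comparison, the sketch below computes the same metrics (accuracy, AUC-ROC, MCC) for a logistic-regression baseline; the synthetic data merely stands in for the 70,000-record CHD dataset.

```python
# Hedged sketch: computing the comparison metrics the study reports
# (accuracy, AUC-ROC, MCC) for a logistic-regression baseline on placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score, matthews_corrcoef

rng = np.random.default_rng(8)
X = rng.normal(size=(2000, 11))                 # placeholder patient features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

pred = model.predict(X_te)
prob = model.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, pred))
print("AUC-ROC :", roc_auc_score(y_te, prob))
print("MCC     :", matthews_corrcoef(y_te, pred))
```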
Enhanced Chronic Heart Disease Prediction Using Feature Engineering and Genetic Algorithm Based Hyperparameter Optimization
The accurate prediction of chronic heart disease is critical for conducting appropriate interventions and individualized patient care. The proposed research presents a supervised machine learning framework combining feature engineering with hyperparameter optimization to boost prediction capabilities. Mutual Information and Lasso Regression techniques revealed age, ap_hi, and cholesterol as the main predictors, which were chosen for analysis. Introducing Genetic Algorithm based hyperparameter optimization led Logistic Regression models to achieve an AUC of 0.95, up from 0.79. The AUC scores of the Decision Tree and Random Forest models increased from 0.83 to 0.86 and from 0.77 to 0.80, respectively. MCC reached nearly 0.90 for both Logistic Regression and Decision Tree. The described techniques demonstrated their capability to enhance machine learning models for heart disease prediction, thus enabling more precise risk assessment for at-risk patients.
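The combination of mutual-information feature ranking with a genetic-style hyperparameter search can be illustrated compactly; the sketch below evolves Logistic Regression's C on placeholder data, and the population size, mutation scale, and feature count are illustrative assumptions rather than the study's settings.

```python
# Hedged sketch: mutual-information feature ranking followed by a tiny
# genetic-style search over Logistic Regression's C; data and GA settings
# are placeholders, not the study's configuration.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + X[:, 3] + rng.normal(size=1000) > 0).astype(int)

ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]
X_top = X[:, ranking[:5]]                               # keep the top-ranked features

def fitness(log_c):
    clf = LogisticRegression(C=10 ** log_c, max_iter=1000)
    return cross_val_score(clf, X_top, y, cv=5, scoring="roc_auc").mean()

population = rng.uniform(-3, 3, size=8)                 # candidate log10(C) values
for _ in range(10):                                     # generations
    scores = np.array([fitness(c) for c in population])
    parents = population[np.argsort(scores)[-4:]]       # select the fittest half
    children = parents + rng.normal(scale=0.3, size=4)  # mutate to form offspring
    population = np.concatenate([parents, children])

best = population[np.argmax([fitness(c) for c in population])]
print("best C:", 10 ** best)
```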
3D Vision in Deep Learning Approaches for Assisted Reproductive Technologies in IVF
In vitro fertilization (IVF) is an important assisted reproductive technology (ART) that relies on accurate embryo assessment and follicle tracking to improve success rates. Recent advances in deep learning and 3D visualization have provided promising solutions to automate and improve both embryo and follicle assessment. This study proposes a deep learning framework for IVF that includes pre-processing, segmentation, and classification techniques. Pre-processing includes non-local means (NLM) filtering and normalization to reduce noise while preserving important morphological details in 3D embryo and follicle imaging; this step ensures improved contrast and clarity, which enables better downstream processing. For segmentation, a U-Net-based framework is used to precisely delineate reproductive structures such as oocytes, embryos, and follicles, which facilitates accurate localization and feature extraction. Segmentation plays a key role in identifying regions of interest and aiding subsequent classification. By focusing on the segmented regions, the R-CNN model distinguishes between viable and non-viable embryos, as well as follicle maturity stages, and automates the grading process with high accuracy. The approach uses two datasets: 3D ultrasound images and 3D OCT images. The proposed 3D deep learning approach provides an automated, objective, and efficient method for embryo and follicle assessment, which reduces the subjectivity of manual assessment. This study highlights the importance of deep learning-based 3D vision techniques in revolutionizing IVF procedures and advancing reproductive medicine.
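The pre-processing stage named above (NLM filtering plus normalization) is the easiest to sketch; below it runs on a synthetic noisy 2D slice, whereas real 3D ultrasound or OCT volumes would be processed slice-by-slice or with a 3D variant. The filter parameters are illustrative assumptions.

```python
# Hedged sketch of the pre-processing stage: non-local means (NLM) filtering
# and intensity normalization on a synthetic noisy slice.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(10)
slice_img = np.clip(rng.normal(0.5, 0.05, size=(128, 128))
                    + 0.1 * rng.normal(size=(128, 128)), 0, 1)

sigma = float(np.mean(estimate_sigma(slice_img)))        # estimate noise level
denoised = denoise_nl_means(slice_img, h=1.15 * sigma,   # NLM smoothing strength
                            fast_mode=True, patch_size=5, patch_distance=6)

normalized = (denoised - denoised.min()) / (denoised.max() - denoised.min())
print("intensity range after normalization:", normalized.min(), normalized.max())
```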
Sustainable Energy Algorithm for Women Tracking Based On Iterative Minimization and Detection Techniques
The problems that women and children encounter in their daily lives are numerous; there are several dangers while going outdoors and returning, including kidnapping. The most pressing issue facing our nation is the prevalence of crime against women, which has an impact on socioeconomic status, women's development, and public safety. In this critical scenario, the primary cause of the high crime rate is brutality towards women. The government works to improve the quality of life for citizens and to decrease crime in order to combat violence against women. Even with the development of applications and other forms of contact information, the rate of crime against women continues to rise, and this issue is considered a major one in our day and age; eventually, crime at the hands of strangers increases. Traditional methods exist for calculating the crime rates in a given location. The battery sustainability algorithm used in the women's tracking system has been adjusted to use less power and manage battery power. Compared to the current DSR protocol agent, the suggested method greatly lowers power consumption overhead and enhances message transmission between source and destination.
AI-Powered Radiology: Optimizing Diagnostic Processes and Superior Patient Care
The radiology unit plays a vital role in modern healthcare, serving as a significant tool for detecting diseases and disorders, observing their evolution, and facilitating treatment through numerous imaging procedures. This paper attempts to review the foremost applications of AI in the field of radiology and to consider possible future evolution. The review synthesizes outcomes from leading scientific sources such as PubMed, Google Scholar, Science Direct, and Springer, reflecting the ongoing evolution, prospects, and barriers of AI in radiology and its subdivisions. It finds that AI serves as a powerful optimization tool, helping technicians and radiologists to choose customized patient protocols, monitor dose constraints, and estimate radiation risks. AI also augments the reporting workflow by associating words, images, and quantifiable data seamlessly. Hence, the role of AI in the radiology sector has grown remarkably in the preceding decade. However, AI in radiology faces challenges related to data testing and validation, professional acceptance, and education and professional training. Despite the challenges, AI affords a prospect for substantial innovation in the radiology field, with improved accuracy, a reduced burden on radiologists, and improved patient care.
Investigating Stable Fixed Point Theorems for Weakly Compatible Mappings in Complete Intuitionistic Fuzzy Metric Spaces
Several common fixed point theorems are formulated for complete intuitionistic fuzzy metric (IFM) spaces and organized in this article. The purpose of the article is to validate these theorems under conditions that increase their future applicability. The concept of reciprocally continuous mappings serves as a requirement for two functions to remain mutually continuous together, while weakly compatible mappings need supplementary conditions to determine fixed points. The findings gain strength because the methodology incorporates associated sequences. The study contributes to fixed point theory in intuitionistic fuzzy metric (IFM) spaces by applying various mathematical methods, and it specifically aims to advance current findings by exchanging complete metric spaces for complete IFM spaces.
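As background for readers less familiar with this setting, the following is the standard definition of an intuitionistic fuzzy metric space commonly used in this line of work (following Park's formulation); the abstract itself does not restate it, so this is included only as an orientation sketch.

```latex
% Background sketch: the standard notion of an intuitionistic fuzzy metric space
% (following Park's formulation), assuming amsmath for the align* environment.
A 5-tuple $(X, M, N, \ast, \diamond)$ is an intuitionistic fuzzy metric space if
$X$ is a nonempty set, $\ast$ is a continuous t-norm, $\diamond$ is a continuous
t-conorm, and $M, N : X \times X \times (0,\infty) \to [0,1]$ satisfy, for all
$x, y, z \in X$ and $s, t > 0$:
\begin{align*}
& M(x,y,t) + N(x,y,t) \le 1, \qquad M(x,y,t) > 0, \qquad M(x,y,t) = 1 \iff x = y,\\
& M(x,y,t) = M(y,x,t), \qquad M(x,y,t) \ast M(y,z,s) \le M(x,z,t+s),\\
& N(x,y,t) < 1, \qquad N(x,y,t) = 0 \iff x = y,\\
& N(x,y,t) = N(y,x,t), \qquad N(x,y,t) \diamond N(y,z,s) \ge N(x,z,t+s),
\end{align*}
with $M(x,y,\cdot)$ and $N(x,y,\cdot)$ continuous on $(0,\infty)$.
```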
Enhancing IoT Security: Intelligent System Design with Machine Learning Framework
The IoT functions as a system of devices connected through networks, where each object possesses a distinct identifier (UID), spanning computing appliances, digital and mechanical items, and living entities. The initial connected network, ARPANET, led to the development of the IoT, which has emerged since 1982 as a fast-expanding technological sphere; more than 27 billion IoT devices are expected to be active worldwide by the last quarter of 2025. These networked devices exchange information across global networks through automated operations that skip traditional human-to-computer or direct human-to-human contact, and the independent operation of IoT systems depends on human participation only for instruction input and data retrieval. The IoT exists as a system of connected devices that generate and transfer data through an information network that also stores and operates on this data, and its progress relies on significant developments across the Cloud Computing, Big Data, and Artificial Intelligence sectors. Security challenges have remained one of the primary concerns impacting the IoT framework. As IoT devices multiply at lightning speed, attackers have launched more cyber-frauds that especially target retail businesses, the manufacturing industry, healthcare operations, and financial establishments. IoT systems exhibit multiple security weaknesses because of default telnet service passwords, unsecured execution zones, outdated software, poor encryption standards, insufficient access controls, extensive attack surfaces, and inadequate industrial security measures. Intrusion detection system (IDS) security and IoT protection benefit from improvement through Meta-Learning, Ensemble Learning, and Anomaly Detection, while the Light Gradient Boosting Machine is combined with Fuzzy C-Means (FCM) and Particle Swarm Optimization (PSO). These algorithms enhance intrusion detection capabilities.
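Of the components listed above, the Light Gradient Boosting Machine is the most straightforward to sketch as an intrusion detector; the flow features and labels below are placeholders, and the FCM clustering and PSO tuning stages are deliberately omitted.

```python
# Hedged sketch: a LightGBM-based intrusion detector on placeholder network
# flow features; the FCM clustering and PSO tuning stages are omitted.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(11)
X = rng.normal(size=(3000, 20))                                    # placeholder flow statistics
y = (X[:, 2] - X[:, 7] + rng.normal(size=3000) > 1).astype(int)    # 1 = attack, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
ids = LGBMClassifier(n_estimators=200, learning_rate=0.05).fit(X_tr, y_tr)
print(classification_report(y_te, ids.predict(X_te)))
```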