Prof. Nedra Mellouli is a full Professor of Computer Science. Her research focuses on multimodal, big-data-driven approaches to time series analytics, with applications in precipitation monitoring, precision agriculture, intelligent irrigation systems, energy optimization, healthcare decision support, and recommendation systems.
Akram Hakiri; Aniruddha Gokhale; Sadok Ben Yahia; Nédra Mellouli
A comprehensive survey on digital twin for future networks and emerging Internet of Things industry Journal article
In: Computer Networks, vol. 244, p. 110350, 2024.
@article{hakiri_3101,
title = {A comprehensive survey on digital twin for future networks and emerging Internet of Things industry},
author = {Akram Hakiri and Aniruddha Gokhale and Sadok Ben Yahia and Nédra Mellouli},
url = {https://www.sciencedirect.com/science/article/pii/S1389128624001828},
year = {2024},
date = {2024-05-01},
journal = {Computer Networks},
volume = {244},
pages = {110350},
abstract = {The rapid growth of industrial digitalization in the Industry 4.0 era is fundamentally transforming the industrial sector by connecting products, machines, and people, offering real-time digital models to allow self-diagnosis, self-optimization and self-configuration. However, this uptake in such a digital transformation faces numerous obstacles. For example, the lack of real-time data feeds to perform custom closed-loop control and realize common, powerful industrial systems, the complexity of traditional tools and their inability in finding effective solutions to industry problems, lack of capabilities to experiment rapidly on innovative ideas, and the absence of continuous real-time interactions between physical objects and their simulation representations along with reliable two-way communications, are key barriers towards the adoption of such a digital transformation. Digital twins hold the promise of improving maintainability and deployability, enabling flexibility, auditability, and responsiveness to changing conditions, allowing continuous learning, monitoring and actuation, and allowing easy integration of new technologies in order to deploy open, scalable and reliable Industrial Internet of Things (IIoT).
A critical understanding of this emerging paradigm is necessary to address the multiple dimensions of challenges in realizing digital twins at scale and create new means to generate knowledge in the industrial IoT. To address these requirements, this paper surveys existing digital twin along software technologies, standardization efforts and the wide range of recent and state-of-the-art digital twin-based projects; presents diverse use cases that can benefit from this emerging technology; followed by an in-depth discussion of the major challenges in this area drawing upon the research status and key trends in Digital Twins.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhihui Ren; Yan-Fang Sang; Peng Cui; Deliang Chen; Yichi Zhang; Tongliang Gong; Shao Sun; Nédra Mellouli
Temporal Scaling Characteristics of Sub-Daily Precipitation in Qinghai-Tibet Plateau Journal article
In: Earth's Future, vol. 12, no. 3, p. 1-16, 2024.
@article{ren_3111,
title = {Temporal Scaling Characteristics of Sub-Daily Precipitation in Qinghai-Tibet Plateau},
author = {Zhihui Ren and Yan-Fang Sang and Peng Cui and Deliang Chen and Yichi Zhang and Tongliang Gong and Shao Sun and Nédra Mellouli},
url = {https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2024EF004417},
year = {2024},
date = {2024-04-01},
journal = {Earth's Future},
volume = {12},
number = {3},
pages = {1-16},
abstract = {The Qinghai-Tibet Plateau (QTP) is highly susceptible to destructive rainstorm hazards and related natural disasters. However, the lack of sub-daily precipitation observations in this region has hindered our understanding of rainstorm-related hazards and their societal impacts. To address this data gap, a new approach is devised to estimate sub-daily precipitation in QTP using daily precipitation data and geographical information. The approach involves establishing a statistical relationship between daily and sub-daily precipitation based on data from 102 observation sites. This process results in a set of functions with six associated parameters. These parameters are then modeled using local geographical and climatic information through a machine learning algorithm called support vector regression. The results indicated that the temporal scaling characteristics of sub-daily precipitation can be accurately described using a logarithmic function. The uncertainty of the estimates is quantified using the coefficient of variance and coefficient of skewness, which are estimated using a logarithmic and linear curve, respectively. Additionally, the six parameters are found to be closely linked to geographical conditions, enabling the creation of a 1-km parameters data set. This data set can be utilized to quantitatively describe the probabilistic distribution and extract key information about maximum precipitation duration (from 1 to 12 hr). Overall, the findings suggest that the generated parameters data set holds significant potential for various applications, including risk analysis, forecasting, and early warning for rainstorm-related natural disasters in QTP. The innovative method developed in this study proves to be an effective approach for estimating sub-daily precipitation and assessing its uncertainty in ungauged regions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
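For readers who want the gist of the estimation scheme described in the abstract above, the following minimal Python sketch (not the authors' code) illustrates the two-stage idea: a logarithmic temporal-scaling curve relating sub-daily amounts to the daily total, and support vector regression mapping site geography to the curve parameters. The feature set, the exact form of the scaling law, and all numbers are illustrative assumptions.

import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def scaling_curve(duration_hr, a, b):
    # Assumed logarithmic scaling: fraction of the daily total falling within `duration_hr` hours.
    return a + b * np.log(duration_hr)

rng = np.random.default_rng(0)
n_sites = 102                                  # gauged sites, as in the paper
geo = rng.normal(size=(n_sites, 3))            # hypothetical [longitude, latitude, elevation], standardized
params = np.column_stack([                     # per-site curve parameters fitted at the gauges (synthetic here)
    0.30 + 0.05 * geo[:, 2] + 0.01 * rng.normal(size=n_sites),
    0.20 + 0.03 * geo[:, 0] + 0.01 * rng.normal(size=n_sites),
])

# Support vector regression (one regressor per parameter) learns geography -> curve parameters,
# so the curve can then be evaluated at ungauged grid cells.
model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)))
model.fit(geo, params)

ungauged = rng.normal(size=(1, 3))             # geography of one ungauged cell
a_hat, b_hat = model.predict(ungauged)[0]
daily_total = 25.0                             # daily precipitation (mm) at that cell
for d in (1, 3, 6, 12):                        # sub-daily durations in hours
    print(f"{d:2d} h: ~{daily_total * scaling_curve(d, a_hat, b_hat):.1f} mm")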
Nzamba Bignoumba; Nédra Mellouli; Sadok Ben Yahia
A new efficient ALignment-driven Neural Network for Mortality Prediction from Irregular Multivariate Time Series data Journal article
In: Expert Systems With Applications, vol. 238, no. Part E, p. 122148, 2024.
@article{bignoumba_2463,
title = {A new efficient ALignment-driven Neural Network for Mortality Prediction from Irregular Multivariate Time Series data},
author = {Nzamba Bignoumba and Nédra Mellouli and Sadok Ben Yahia},
url = {https://www.sciencedirect.com/science/article/pii/S0957417423026507?via%3Dihub},
year = {2024},
date = {2024-03-01},
journal = {Expert Systems With Applications},
volume = {238},
number = {Part E},
pages = {122148},
abstract = {The irregularity of the time interval between observations in and across the stream is a key factor that leads to a drop in performance when classical machine learning or deep learning models are used for a downstream task requiring multivariate time series. Indeed, irregular multivariate time series not only increase the rate of missing values but also lead to data sparsity, which consequently makes the data almost unleverageable and/or ineffective for models. To tackle this scorching challenge, most of the pioneering approaches apply imputation or interpolation in their core, which might lead to embedding data with noise. To especially address this irregular multivariate time series issue, we introduce, in this paper, a new deep neural network model called ALignment-driven Neural Network. The innovative idea of our model is to transform the irregular multivariate time series into pseudo-aligned (or pseudo-regular) latent values. The latter are shown as a matrix, where the coefficients are the latent values of each feature at user-defined reference time points that are evenly spaced. They are obtained through a duplication process driven by an exponential decay mechanism. The obtained output is then passed to a Recurrent Neural Network model, which is undoubtedly the must-use model for regular time series data. To show that our model added value, we looked at the Intensive Care Unit mortality prediction task. In this unit, the physiological measurements used to make decisions have a problem with time irregularity. Leveraging the publicly available MIMIC-III, we compare the performance of our model to that of flagship models. In addition, we also performed extensive ablation studies to highlight the importance of specific components in our model. Interestingly enough, whenever data is collected 24 and 48 h after a patient's admission, we outperform our pioneering competitors, i.e., +1.1% and +1.5% for the AUC score, +2.3% and +2.4% for the AUPRC score, and +0.6% and +1.7% for the F1-score.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
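As a rough illustration of the alignment idea summarized in the abstract above, here is a minimal PyTorch sketch (not the authors' implementation): irregular observations are spread onto evenly spaced reference time points with exponential-decay weights, and the resulting regular matrix is fed to a recurrent network. The decay rate, reference grid, and layer sizes are assumptions for illustration.

import torch
import torch.nn as nn

def align_to_grid(times, values, mask, ref_points, decay=1.0):
    # times:  (T,)   observation timestamps
    # values: (T, F) observed values (zeros where missing)
    # mask:   (T, F) 1 where a value was actually observed
    # ref_points: (R,) evenly spaced reference time points
    # Returns (R, F): decay-weighted average of the observations at each reference point.
    dist = (ref_points[:, None] - times[None, :]).abs()           # (R, T)
    w = torch.exp(-decay * dist)[:, :, None] * mask[None, :, :]   # (R, T, F)
    num = (w * values[None, :, :]).sum(dim=1)                     # (R, F)
    den = w.sum(dim=1).clamp_min(1e-8)
    return num / den

class AlignedGRUClassifier(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, aligned):                  # aligned: (B, R, F)
        _, h = self.gru(aligned)
        return self.head(h[-1]).squeeze(-1)      # one mortality logit per sample

# Toy example: one ICU stay with 3 features observed at irregular times over 48 h.
times = torch.tensor([0.5, 3.0, 7.2, 20.0, 41.5])
values = torch.randn(5, 3)
mask = torch.randint(0, 2, (5, 3)).float()
ref = torch.linspace(0, 48, steps=24)            # user-defined, evenly spaced reference points
aligned = align_to_grid(times, values, mask, ref).unsqueeze(0)   # (1, 24, 3)
logit = AlignedGRUClassifier(n_features=3)(aligned)
print(logit)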
Hanen Balti; Ali Ben Abbes; Yanfang Sang; Nédra Mellouli; Imed Riadh Farah
Spatio-temporal Heterogeneous Graph using Multivariate Earth Observation Time Series: Application for drought forecasting Journal article
In: Computers & Geosciences, vol. 179, p. 105435, 2023.
@article{balti_2397,
title = {Spatio-temporal Heterogeneous Graph using Multivariate Earth Observation Time Series: Application for drought forecasting},
author = {Hanen Balti and Ali Ben Abbes and Yanfang Sang and Nédra Mellouli and Imed Riadh Farah},
url = {https://www.sciencedirect.com/science/article/pii/S0098300423001395?via%3Dihub},
year = {2023},
date = {2023-10-01},
journal = {Computers & Geosciences},
volume = {179},
pages = {105435},
abstract = {Accurate forecasting is required for the effective risk management of drought disasters. Many machine learning- and deep learning-based models have been proposed for drought forecasting, however, they cannot handle the temporal and/or spatial dependencies in the input data, causing unexpected forecasting results. In order to solve the challenging issue, in this paper we proposed the Heterogeneous Spatio-Temporal Graph (HetSPGraph), for drought forecasting. It includes three major layers: spatial aggregations including inter and intra aggregations, temporal aggregation, and a forecasting network. The main function of HetSPGraph is to learn the dynamic spatiotemporal correlations between the regions and to further predict the drought in different regions, based on which accurate drought forecasting can be achieved. Experimental forecasting results of the Standardized Precipitation Evapotranspiration Index (SPEI) in China indicated that the HetSPGraph model outperformed the traditional baseline methods including the Long Short-Term Memory model (LSTM), Convolutional Neural Network-LSTM (CNN-LSTM), Gated Recurrent Unit (GRU), Spatio-Temporal Graph Convolutional Networks (STGCN) and Geographic-Semantic-Temporal Hypergraph Convolutional Network (GST-HCN). Even for long-term forecasting (12 months), more accurate forecasting results, with the coefficient of determination higher than 0.89, can also be obtained by HetSPGraph compared to the other three models. The proposed HetSPGraph model has the potential for wider use in forecasting drought and other natural disasters.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
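The abstract above outlines three layers: spatial aggregation over a region graph, temporal aggregation, and a forecasting network. The heavily simplified PyTorch sketch below (not the HetSPGraph code, and without its inter/intra heterogeneous aggregations) shows how such a pipeline can be wired together; the adjacency matrix, feature count, and layer sizes are assumptions.

import torch
import torch.nn as nn

class SpatioTemporalForecaster(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.spatial = nn.Linear(n_features, hidden)   # shared transform applied before graph averaging
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)               # next-step SPEI per region

    def forward(self, x, adj):
        # x: (T, N, F) time steps x regions x features; adj: (N, N) row-normalized adjacency
        h = torch.relu(self.spatial(x))                # (T, N, H)
        h = torch.einsum("nm,tmh->tnh", adj, h)        # spatial aggregation over neighbouring regions
        h = h.permute(1, 0, 2)                         # (N, T, H): one sequence per region
        _, last = self.temporal(h)                     # temporal aggregation
        return self.head(last[-1]).squeeze(-1)         # (N,) forecasts

# Toy run: 12 monthly steps, 5 regions, 4 earth-observation variables.
T, N, F = 12, 5, 4
adj = torch.rand(N, N)
adj = adj / adj.sum(dim=1, keepdim=True)               # row-normalize the region graph
pred = SpatioTemporalForecaster(F)(torch.randn(T, N, F), adj)
print(pred)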
Nesrine Waga; Hichem Kallel; Nédra Mellouli
Analytical and Deep Learning Approaches for Solving the Inverse Kinematic Problem of a High Degrees of Freedom Robotic Arm Journal article
In: Engineering Applications Of Artificial Intelligence, vol. 123, no. Part B, p. 106301, 2023.
@article{waga_2351,
title = {Analytical and Deep Learning Approaches for Solving the Inverse Kinematic Problem of a High Degrees of Freedom Robotic Arm},
author = {Nesrine Waga and Hichem Kallel and Nédra Mellouli},
url = {https://www.sciencedirect.com/science/article/pii/S0952197623004852?via%3Dihub},
year = {2023},
date = {2023-08-01},
journal = {Engineering Applications Of Artificial Intelligence},
volume = {123},
number = {Part B},
pages = {106301},
abstract = {Inverse kinematics is the basis for controlling the motion of robotic manipulators. It defines the required joint variables for the robotic end-effector to accurately reach the desired location. Due to the derivation difficulty, computation complexity, singularity problem, and redundancy, analytical Inverse kinematics solutions pose numerous challenges to the operation of many robotic arms, especially for a manipulator with a high degree of freedom. This paper develops different Deep Learning networks for solving the Inverse kinematics problem of six-Degrees-of-Freedom robotic manipulators. The implemented neural architectures are Artificial Neural Network, Convolutional Neural Network, Long-Short Term Memory, Gated Recurrent Unit, and Bidirectional Long-Short Term Memory. In this context, we associate the proposed results with a specific tuning of Deep Learning network hyper-parameters (number of hidden layers, learning rate, loss function, optimization algorithm, number of epochs, etc.). The Bidirectional Long-Short Term Memory network outperformed all proposed architectures. To be as close as possible to the experimental results, we have included two types of noise in the training data set to validate which of the five proposed neural networks is most efficient. Furthermore, in this study, we compare the performance of analytical and soft computing solutions in generating robots' trajectories. We include this scenario, focusing on the advantage of implementing neural networks in avoiding the singularity problem that can occur using the analytical approach. In addition, we used the RoboDK simulator to show simulation results with real-world meaning. The performance of Deep Learning models depends on the complexity of the posed problem. Moreover, the complexity of the Inverse Kinematics problem is related to the number of Degrees of Freedom. At the end of this work, we evaluate the influence of the complexity of robotic manipulators on the proposed Deep Learning networks' performance. The results show that the implemented Deep Learning mechanisms performed well in reaching the desired pose of the end-effector. The proposed inverse kinematics strategies apply to other manipulators with different numbers of Degrees of Freedom.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
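To make the data-driven inverse-kinematics idea in the abstract above concrete, here is a minimal PyTorch sketch (not the paper's setup): joint angles are sampled, end-effector poses are computed with forward kinematics, and a network learns the inverse mapping. A planar three-joint arm stands in for the 6-DoF manipulator, and a plain MLP stands in for the BiLSTM the paper found best; the link lengths, joint ranges, and task-space loss are assumptions.

import torch
import torch.nn as nn

LINKS = torch.tensor([1.0, 0.8, 0.5])                 # assumed link lengths

def forward_kinematics(q):                            # q: (B, 3) joint angles
    # Planar 3-link arm: returns end-effector (x, y) position and final orientation.
    angles = torch.cumsum(q, dim=1)                   # absolute angle of each link
    x = (LINKS * torch.cos(angles)).sum(dim=1)
    y = (LINKS * torch.sin(angles)).sum(dim=1)
    return torch.stack([x, y, angles[:, -1]], dim=1)  # (B, 3) pose

# Training data: random joint configurations and the poses they reach.
q_train = (torch.rand(20000, 3) - 0.5) * torch.pi
pose_train = forward_kinematics(q_train)

model = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    idx = torch.randint(0, q_train.shape[0], (256,))
    q_hat = model(pose_train[idx])
    # Loss in task space: compare the pose reached by the predicted joints,
    # which sidesteps the multiple-solution ambiguity of inverse kinematics.
    loss = nn.functional.mse_loss(forward_kinematics(q_hat), pose_train[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

target = forward_kinematics((torch.rand(1, 3) - 0.5) * torch.pi)
print("reached:", forward_kinematics(model(target)), "target:", target)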
Marwa Said; Karim Houidi; Akram Hakiri; Nédra Mellouli; Hella Kaffel
Performance Evaluation of LoRaWAN Propagation Models for Large-Scale IoT Deployments Proceedings Article
In: IEEE (Ed.): 27th IEEE International Symposium on Real-Time Distributed Computing, p. 6, Tunis, Tunisia, 2024, ISBN: 979-8-3503-7128-4.
@inproceedings{said_3113,
title = {Performance Evaluation of LoRaWAN Propagation Models for Large-Scale IoT Deployments},
author = {Marwa Said and Karim Houidi and Akram Hakiri and Nédra Mellouli and Hella Kaffel},
editor = {IEEE},
url = {https://isorc.github.io/2024/html/program.html},
isbn = {979-8-3503-7128-4},
year = {2024},
date = {2024-05-01},
booktitle = {27th IEEE International Symposium On Real-Time Distributed Computing},
pages = {6},
address = {Tunis, Tunisia},
note = {22-25 May 2024},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Marius Ortega; Nédra Mellouli; Aurélien Bossard; Christophe Rodrigues
REDIRE : Réduction Extrême de DImension pour le Résumé Extractif Proceedings Article
In: 24ème conférence francophone sur l'Extraction et la Gestion des Connaissances, Dijon, France, 2024.
@inproceedings{ortega_2654,
title = {REDIRE : Réduction Extrême de DImension pour le Résumé Extractif},
author = {Marius Ortega and Nédra Mellouli and Aurélien Bossard and Christophe Rodrigues},
url = {https://iutdijon.u-bourgogne.fr/egc2024/articles-acceptes/},
year = {2024},
date = {2024-01-01},
booktitle = {24ème conférence francophone sur l'Extraction et la Gestion des Connaissances},
address = {Dijon, France},
abstract = {This paper presents an unsupervised automatic summarization model capable of extracting the most important sentences from a corpus. To extract sentences for a summary, we use pre-trained word embeddings to represent the documents. From this dense cloud of word vectors, we apply an extreme dimension reduction to identify important words, which we group by proximity. Sentences are extracted using linear optimization to maximize the information present in the summary. We evaluate the approach on large documents and present very encouraging initial results.},
note = {22-26/01/2024},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
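As a toy illustration of the pipeline the abstract above describes (embed words, reduce dimension aggressively, group important words by proximity, then select sentences), here is a minimal Python sketch that is not the REDIRE implementation: random vectors stand in for pre-trained embeddings, PCA and k-means stand in for the paper's reduction and grouping steps, and a greedy coverage loop stands in for its linear-optimization step.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

sentences = [
    "Digital twins mirror physical systems in real time.",
    "Drought forecasting benefits from spatio-temporal models.",
    "Real-time mirrors of machines support predictive maintenance.",
    "Extractive summarization selects the most informative sentences.",
]
vocab = sorted({w.strip(".,").lower() for s in sentences for w in s.split()})
rng = np.random.default_rng(0)
word_vec = {w: rng.normal(size=100) for w in vocab}        # stand-in for pre-trained embeddings

# Extreme dimension reduction, then group words by proximity in the reduced space.
reduced = PCA(n_components=2).fit_transform(np.array([word_vec[w] for w in vocab]))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
word_cluster = dict(zip(vocab, labels))

def clusters_of(sentence):
    return {word_cluster[w.strip(".,").lower()] for w in sentence.split()}

# Greedy stand-in for the linear program: pick sentences that add the most
# not-yet-covered word clusters until the length budget is reached.
budget, covered, summary = 2, set(), []
for _ in range(budget):
    best = max((s for s in sentences if s not in summary),
               key=lambda s: len(clusters_of(s) - covered))
    summary.append(best)
    covered |= clusters_of(best)
print(summary)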