5G and beyond cellular networks (NextG) will support the continuous execution of resource-expensive edge-assisted deep learning (DL) tasks. To this end, Radio Access Network (RAN) resources will need to be carefully "sliced" to satisfy heterogeneous application requirements while minimizing RAN usage. Existing slicing frameworks treat each DL task as equal and inflexibly define the resources to assign to each task, which leads to sub-optimal performance. In this paper, we propose SEM-O-RAN, the first semantic and flexible slicing framework for NextG Open RANs. Our key intuition is that different DL classifiers can tolerate different levels of image compression, due to the semantic nature of the target classes. Therefore, compression can be semantically applied so that the networking load can be minimized. Moreover, flexibility allows SEM-O-RAN to consider multiple edge allocations leading to the same task-related performance, which significantly improves system-wide performance as more tasks can be allocated. First, we mathematically formulate the Semantic Flexible Edge Slicing Problem (SF-ESP), demonstrate that it is NP-hard, and provide an approximation algorithm to solve it efficiently. Then, we evaluate the performance of SEM-O-RAN through extensive numerical analysis with state-of-the-art multi-object detection (YOLOX) and image segmentation (BiSeNet V2), as well as real-world experiments on the Colosseum testbed. Our results show that SEM-O-RAN improves the number of allocated tasks by up to 169% with respect to the state of the art.
@inproceedings{puligheddu2023sem,abbr={Conference},bibtex_show={true},title={SEM-O-RAN: Semantic and Flexible O-RAN Slicing for NextG Edge-Assisted Mobile Systems},author={Puligheddu, Corrado and Ashdown, Jonathan and Chiasserini, Carla Fabiana and Restuccia, Francesco},booktitle={Proc. of IEEE Conference on Computer Communications (INFOCOM), Preprint: https://arxiv.org/abs/2212.11853},year={2023}}
2022
Journal
Split Computing and Early Exiting for Deep Learning Applications: Survey and Research Challenges
Matsubara, Yoshitomo, Levorato, Marco, and Restuccia, Francesco
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others. However, continuously executing the entire DNN on mobile devices can quickly deplete their battery. Although task offloading to cloud/edge servers may decrease the mobile device’s computational burden, erratic patterns in channel quality, network, and edge server load can lead to a significant delay in task execution. Recently, approaches based on split computing (SC) have been proposed, where the DNN is split into a head and a tail model, executed respectively on the mobile device and on the edge server. Ultimately, this may reduce bandwidth usage as well as energy consumption. Another approach, called early exiting (EE), trains models to embed multiple “exits” earlier in the architecture, each providing increasingly higher target accuracy. Therefore, the tradeoff between accuracy and delay can be tuned according to the current conditions or application demands. In this article, we provide a comprehensive survey of the state of the art in SC and EE strategies by presenting a comparison of the most relevant approaches. We conclude the article by providing a set of compelling research challenges.
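To make the split-computing and early-exiting ideas concrete, below is a minimal PyTorch-style sketch, assuming an illustrative toy architecture and a made-up confidence threshold (neither comes from the survey): the head runs on the mobile device with an attached early exit, and the intermediate tensor is offloaded to the edge-side tail only when the exit is not confident enough.

```python
import torch
import torch.nn as nn

class HeadWithExit(nn.Module):
    """Mobile-side 'head' of a split DNN, plus one early-exit classifier."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, n_classes)  # early-exit branch

    def forward(self, x):
        z = self.head(x)                    # intermediate tensor
        return z, self.exit1(z.flatten(1))  # tensor + early-exit logits

head = HeadWithExit()
z, logits = head(torch.randn(1, 3, 32, 32))
if logits.softmax(-1).max() > 0.9:  # confident: classify locally, exit early
    pred = logits.argmax(-1)
else:                               # uncertain: offload z to the edge 'tail'
    pass                            # tail(z) would run on the edge server
```

The accuracy/delay tradeoff described above is tuned simply by moving the confidence threshold.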
Journal
Toward Polymorphic Internet of Things Receivers Through Real-Time Waveform-Level Deep Learning
Restuccia, Francesco, and Melodia, Tommaso
GetMobile: Mobile Computing and Communications Dec 2022
Wireless systems such as the Internet of Things (IoT) are changing the way we interact with the cyber and the physical world. As IoT systems become more and more pervasive, it is imperative to design wireless protocols that can effectively and efficiently support IoT devices and operations. On the other hand, today’s IoT wireless systems are based on inflexible designs, which makes them inefficient and prone to a variety of wireless attacks. In this paper, we introduce the new notion of a deep learning-based polymorphic IoT receiver, able to reconfigure its waveform demodulation strategy itself in real time, based on the inferred waveform parameters. Our key innovation is the introduction of a novel embedded deep learning architecture that enables the solution of waveform inference problems, which is then integrated into a generalized hardware/software architecture with radio components and signal processing. Our polymorphic wireless receiver is prototyped on a custom-made software-defined radio platform. We show through extensive over-the-air experiments that the system achieves throughput within 87% of a perfect-knowledge Oracle system, thus demonstrating for the first time that polymorphic receivers are feasible.
@article{restuccia2022toward,abbr={Journal},title={Toward Polymorphic Internet of Things Receivers Through Real-Time Waveform-Level Deep Learning},bibtex_show={true},author={Restuccia, Francesco and Melodia, Tommaso},journal={GetMobile: Mobile Computing and Communications},volume={25},html={https://dl.acm.org/doi/abs/10.1145/3511285.3511294?casa_token=ziKh5Pvf8NIAAAAA:OXKzC4Ck_4f47V0oKB1lBQxdXshK7uISr859oeAI_u3BytSo5ZAeeirEoQjXMBmCkaUKS4GkMNQHEQ},number={3},pages={28--33},year={2022},publisher={ACM New York, NY, USA}}
Conference
Terahertz Communications Can Work in Rain and Snow: Impact of Adverse Weather Conditions on Channels at 140 GHz
Sen, Priyangshu, Hall, Jacob, Polese, Michele, Petrov, Vitaly, Bodet, Duschia, Restuccia, Francesco, Melodia, Tommaso, and Jornet, Josep M.
In Proceedings of the 6th ACM Workshop on Millimeter-Wave and Terahertz Networks and Sensing Systems Dec 2022
Next-generation wireless networks will leverage the spectrum above 100 GHz to enable ultra-high data rate communications over multi-GHz-wide bandwidths. The propagation environment at such high frequencies, however, introduces challenges throughout the whole protocol stack design, from physical layer signal processing to application design. Therefore, it is fundamental to develop a holistic understanding of the channel propagation and fading characteristics over realistic deployment scenarios and ultra-wide bands. In this paper, we conduct an extensive measurement campaign to evaluate the impact of weather conditions on a wireless link in the 130-150 GHz band through a channel sounding campaign with clear weather, rain, and snow in a typical urban backhaul scenario. We present a novel channel sounder design that captures signals with -82 dBm sensitivity and 20 GHz of bandwidth. We analyze link budget, capacity, as well as channel parameters such as the delay spread and the K-factor. Our experimental results indicate that in the considered context the adverse weather does not interrupt the link, but introduces some additional constraints (e.g., high delay spread and increase in path loss in snow conditions) that need to be accounted for in the design of reliable Sixth Generation (6G) communication links above 100 GHz.
@inproceedings{10.1145/3555077.3556470,abbr={Conference},author={Sen, Priyangshu and Hall, Jacob and Polese, Michele and Petrov, Vitaly and Bodet, Duschia and Restuccia, Francesco and Melodia, Tommaso and Jornet, Josep M.},title={Terahertz Communications Can Work in Rain and Snow: Impact of Adverse Weather Conditions on Channels at 140 GHz},year={2022},publisher={Association for Computing Machinery},html={https://dl.acm.org/doi/abs/10.1145/3555077.3556470},doi={10.1145/3555077.3556470},booktitle={Proceedings of the 6th ACM Workshop on Millimeter-Wave and Terahertz Networks and Sensing Systems},pages={13--18},bibtex_show={true}}
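Of the channel parameters analyzed above, the delay spread admits a compact worked example. A sketch using the standard second-central-moment definition over a power delay profile (the toy taps below are made up, not measurement data):

```python
import numpy as np

def rms_delay_spread(pdp, tau):
    """RMS delay spread from a power delay profile `pdp` at delays `tau`."""
    p = pdp / pdp.sum()                    # normalize to a distribution
    mean_tau = np.sum(p * tau)             # mean excess delay
    return np.sqrt(np.sum(p * (tau - mean_tau) ** 2))

tau = np.array([0.0, 1e-9, 2e-9, 3e-9])   # toy taps, 1 ns apart
pdp = np.array([1.0, 0.4, 0.1, 0.05])     # linear power per tap
print(rms_delay_spread(pdp, tau))         # ~0.76 ns for this toy profile
```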
Conference
SDR-LoRa: Dissecting and Implementing LoRa on Software-Defined Radios to Advance Experimental IoT Research
Busacca, Fabio, Mangione, Stefano, Tinnirello, Ilenia, Palazzo, Sergio, and Restuccia, Francesco
In Proceedings of the 16th ACM Workshop on Wireless Network Testbeds, Experimental Evaluation & CHaracterization Dec 2022
In this paper, we present SDR-LoRa, a full-fledged SDR implementation of a LoRa transmitter and receiver. First, we reverse-engineer the LoRa physical layer (PHY) functionalities, including the procedures of packet modulation, demodulation, and preamble detection. Based on this analysis, we develop the first Software Defined Radio (SDR) implementation of the LoRa PHY. Furthermore, we integrate LoRa with an Automatic Repeat Request (ARQ) error detection protocol. SDR-LoRa has been validated on (i) the Colosseum wireless channel emulator; and (ii) a real testbed with USRP radios and commercial-off-the-shelf (COTS) devices. Our experimental results demonstrate that the performance of SDR-LoRa is in line with commercial LoRa systems. We pledge to share the entirety of the SDR-LoRa code.
@inproceedings{10.1145/3556564.3558239,abbr={Conference},bibtex_show={true},author={Busacca, Fabio and Mangione, Stefano and Tinnirello, Ilenia and Palazzo, Sergio and Restuccia, Francesco},title={SDR-LoRa: Dissecting and Implementing LoRa on Software-Defined Radios to Advance Experimental IoT Research},year={2022},isbn={9781450395274},publisher={Association for Computing Machinery},address={New York, NY, USA},url={https://doi.org/10.1145/3556564.3558239},doi={10.1145/3556564.3558239},booktitle={Proceedings of the 16th ACM Workshop on Wireless Network Testbeds, Experimental Evaluation & CHaracterization},pages={24--31},numpages={8},keywords={LoRa, IoT, software defined radio, LPWAN},location={Sydney, NSW, Australia},series={WiNTECH '22}}
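For context on what the reverse-engineered PHY does, LoRa modulates symbols as cyclic shifts of a chirp. The toy NumPy sketch below (not the SDR-LoRa code; it assumes the common sampling-rate-equals-bandwidth convention, so cyclic wrap-around happens via aliasing) shows modulation and the dechirp-plus-FFT demodulation:

```python
import numpy as np

SF = 7            # spreading factor
N = 2 ** SF       # chips per symbol (sampling rate = bandwidth assumed)

def lora_mod(sym):
    """Baseband LoRa chirp for symbol value `sym` in 0..N-1."""
    n = np.arange(N)
    # Quadratic phase = linear frequency ramp; `sym` offsets the ramp.
    return np.exp(2j * np.pi * (n * n / (2 * N) + sym * n / N))

def lora_demod(rx):
    """Dechirp (multiply by conjugate base chirp), then FFT peak -> symbol."""
    return int(np.argmax(np.abs(np.fft.fft(rx * np.conj(lora_mod(0))))))

assert lora_demod(lora_mod(42)) == 42     # round-trip on a clean channel
```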
Conference
DeepCSI: Rethinking Wi-Fi Radio Fingerprinting Through MU-MIMO CSI Feedback Deep Learning
Meneghello, Francesca, Rossi, Michele, and Restuccia, Francesco
Proc. of International Conference on Distributed Computing Systems (ICDCS) 2022
We present DeepCSI, a novel approach to Wi-Fi radio fingerprinting (RFP) which leverages standard-compliant beamforming feedback matrices to authenticate MU-MIMO Wi-Fi devices on the move. By capturing unique imperfections in off-the-shelf radio circuitry, RFP techniques can identify wireless devices directly at the physical layer, allowing low-latency low-energy cryptography-free authentication. However, existing Wi-Fi RFP techniques are based on software-defined radios (SDRs), which may ultimately prevent their widespread adoption. Moreover, it is unclear whether existing strategies can work in the presence of MU-MIMO transmitters, a key technology in modern Wi-Fi standards. In contrast to prior work, DeepCSI does not require SDR technologies and can be run on any low-cost Wi-Fi device to authenticate MU-MIMO transmitters. Our key intuition is that imperfections in the transmitter’s radio circuitry percolate onto the beamforming feedback matrix, so that RFP can be performed without explicit channel state information (CSI) computation. DeepCSI is robust to inter-stream and inter-user interference, since the beamforming feedback is not affected by those phenomena. We extensively evaluate the performance of DeepCSI through a massive data collection campaign performed in the wild with off-the-shelf equipment, where 10 MU-MIMO Wi-Fi radios emit signals in different positions. Experimental results indicate that DeepCSI correctly identifies the transmitter with an accuracy of up to 98%. The identification accuracy remains above 82% when the device moves within the environment. To allow replicability and provide a performance benchmark, we pledge to share the 800 GB of datasets (collected in static and, for the first time, dynamic conditions) and the code database with the community.
@article{meneghello2022deepcsi,abbr={Conference},title={DeepCSI: Rethinking Wi-Fi Radio Fingerprinting Through MU-MIMO CSI Feedback Deep Learning},author={Meneghello, Francesca and Rossi, Michele and Restuccia, Francesco},journal={Proc. of International Conference on Distributed Computing Systems (ICDCS)},bibtex_show={true},html={https://arxiv.org/abs/2204.07614},year={2022}}
Conference
ChARM: NextG Spectrum Sharing Through Data-Driven Real-Time O-RAN Dynamic Control
Baldesi, Luca, Restuccia, Francesco, and Melodia, Tommaso
In Proc. of IEEE Conference on Computer Communications (INFOCOM), Best Paper Award 2022
Today’s radio access networks (RANs) are monolithic entities which often operate statically on a given set of parameters for the entirety of their operations. To implement realistic and effective spectrum sharing policies, RANs will need to seamlessly and intelligently change their operational parameters. In stark contrast with existing paradigms, the new O-RAN architectures for 5G-and-beyond networks (NextG) separate the logic that controls the RAN from its hardware substrate, allowing unprecedented real-time fine-grained control of RAN components. In this context, we propose the Channel-Aware Reactive Mechanism (ChARM), a data-driven O-RAN-compliant framework that allows (i) sensing the spectrum to infer the presence of interference and (ii) reacting in real time by switching the distributed unit (DU) and radio unit (RU) operational parameters according to a specified spectrum access policy. ChARM is based on neural networks operating directly on unprocessed I/Q waveforms to determine the current spectrum context. ChARM does not require any modification to the existing 3GPP standards. It is designed to operate within the O-RAN specifications, and can be used in conjunction with other spectrum sharing mechanisms (e.g., LTE-U, LTE-LAA or MulteFire). We demonstrate the performance of ChARM in the context of spectrum sharing among LTE and Wi-Fi in unlicensed bands, where a controller operating over a RAN Intelligent Controller (RIC) senses the spectrum and switches cell frequency to avoid Wi-Fi. We develop a prototype of ChARM using srsRAN, and leverage the Colosseum channel emulator to collect a large-scale waveform dataset to train our neural networks with. To collect standard-compliant Wi-Fi data, we extended the Colosseum testbed using system-on-chip (SoC) boards running a modified version of the OpenWiFi architecture. Experimental results show that ChARM achieves accuracy of up to 96% on Colosseum and 85% on an over-the-air testbed, demonstrating the capacity of ChARM to exploit the considered spectrum channels.
@inproceedings{baldesi2022charm,abbr={Conference},bibtex_show={true},author={Baldesi, Luca and Restuccia, Francesco and Melodia, Tommaso},booktitle={Proc. of IEEE Conference on Computer Communications (INFOCOM), Best Paper Award},title={ChARM: NextG Spectrum Sharing Through Data-Driven Real-Time O-RAN Dynamic Control},year={2022},volume={},number={},pages={240-249},doi={10.1109/INFOCOM48880.2022.9796985}}
Conference
SmartDet: Context-Aware Dynamic Control of Edge Task Offloading for Mobile Object Detection
Callegaro, Davide, Levorato, Marco, and Restuccia, Francesco
In Proc. of IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM) 2022
Mobile devices increasingly rely on object detection (OD) through deep neural networks (DNNs) to perform critical tasks. Due to their high complexity, the execution of these DNNs requires excessive time and energy. Low-complexity object tracking (OT) can be used with OD, where the latter is periodically applied to generate “fresh” references for tracking. However, the frames processed with OD incur large delays, which may make the reference outdated and degrade tracking quality. Herein, we propose to use edge computing in this context, and establish parallel OT (at the mobile device) and OD (at the edge server) processes that are resilient to large OD latency. We propose Katch-Up, a novel tracking mechanism that improves the system resilience to excessive OD delay. However, while Katch-Up significantly improves performance, it also increases the computing load of the mobile device. Hence, we design SmartDet, a low-complexity controller based on deep reinforcement learning (DRL) that learns to control the trade-off between resource utilization and OD performance. SmartDet takes as input context information related to the current video content and the current network conditions to optimize the frequency and type of OD offloading, as well as Katch-Up utilization. We extensively evaluate SmartDet on a real-world testbed composed of a Jetson Nano as the mobile device and a GTX 980 Ti as the edge server, connected through a Wi-Fi link. Experimental results show that SmartDet achieves an optimal balance between tracking performance, measured as mean Average Recall (mAR), and resource usage. With respect to a baseline with full Katch-Up usage and maximum channel usage, SmartDet increases mAR by 4% while using 50% less channel and 30% less power resources associated with Katch-Up. With respect to a fixed strategy using minimal resources, it increases mAR by 20% while using Katch-Up on one-third of the frames.
Conference
BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing
Matsubara, Yoshitomo, Callegaro, Davide, Singh, Sameer, Levorato, Marco, and Restuccia, Francesco
In Proc. of IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM) 2022
Although mission-critical applications require the use of deep neural networks (DNNs), their continuous execution at mobile devices results in a significant increase in energy consumption. While edge offloading can decrease energy consumption, erratic patterns in channel quality, network and edge server load can lead to severe disruption of the system’s key operations. An alternative approach, called split computing, generates compressed representations within the model (called “bottlenecks”), to reduce bandwidth usage and energy consumption. Prior work has proposed approaches that introduce additional layers, to the detriment of energy consumption and latency. For this reason, we propose a new framework called BottleFit, which, in addition to targeted DNN architecture modifications, includes a novel training strategy to achieve high accuracy even with strong compression rates. We apply BottleFit to cutting-edge DNN models in image classification, and show that BottleFit achieves 77.1% data compression with up to 0.6% accuracy loss on the ImageNet dataset, while state-of-the-art approaches such as SPINN lose up to 6% in accuracy. We experimentally measure the power consumption and latency of an image classification application running on an NVIDIA Jetson Nano board (GPU-based) and a Raspberry Pi board (GPU-less). We show that BottleFit decreases power consumption and latency respectively by up to 49% and 89% with respect to (w.r.t.) local computing, and by 37% and 55% w.r.t. edge offloading. We also compare BottleFit with state-of-the-art autoencoder-based approaches, and show that (i) BottleFit reduces power consumption and execution time respectively by up to 54% and 44% on the Jetson and 40% and 62% on the Raspberry Pi; (ii) the size of the head model executed on the mobile device is 83 times smaller. The code repository will be published for full reproducibility of the results.
@inproceedings{matsubara2022bottlefit,abbr={Conference},title={BottleFit: Learning Compressed Representations in Deep Neural Networks for Effective and Efficient Split Computing},author={Matsubara, Yoshitomo and Callegaro, Davide and Singh, Sameer and Levorato, Marco and Restuccia, Francesco},bibtex_show={true},booktitle={Proc. of IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)},volume={},number={},pages={337-346},year={2022},doi={10.1109/WoWMoM54355.2022.00032}}
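A minimal sketch of the bottleneck idea described above, assuming illustrative layer sizes (BottleFit's actual architecture modifications and training strategy are in the paper): the head ends in a narrow compressed tensor, and only that tensor crosses the wireless link.

```python
import torch.nn as nn

class BottleneckSplit(nn.Module):
    """Split DNN whose head ends in a narrow compressed representation."""
    def __init__(self, n_classes=1000, bottleneck_ch=12):
        super().__init__()
        self.head = nn.Sequential(              # runs on the mobile device
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, bottleneck_ch, 1))    # the "bottleneck"
        self.tail = nn.Sequential(              # runs on the edge server
            nn.Conv2d(bottleneck_ch, 64, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes))

    def forward(self, x):
        z = self.head(x)     # only z is transmitted over the network
        return self.tail(z)
```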
Conference
ReWiS: Reliable Wi-Fi Sensing Through Few-Shot Multi-Antenna Multi-Receiver CSI Learning
Bahadori, Niloofar, Ashdown, Jonathan, and Restuccia, Francesco
In 2022 IEEE 23rd International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM) 2022
Thanks to the ubiquitousness of Wi-Fi access points and devices, Wi-Fi sensing enables transformative applications in remote health care, home/office security, and surveillance, just to name a few. Existing work has explored the usage of machine learning (ML) on channel state information (CSI) computed from Wi-Fi packets to classify events of interest. However, most of these algorithms require a significant amount of data collection, as well as extensive computational power for additional CSI feature extraction. Moreover, the majority of these models suffer from poor accuracy when tested in a new/untrained environment. In this paper, we propose ReWiS, a novel framework for robust and environment-independent Wi-Fi sensing. The key innovation of ReWiS is to leverage few-shot learning (FSL) as the inference engine, which (i) reduces the need for extensive data collection and application-specific feature extraction; (ii) can rapidly generalize to new tasks by leveraging only a few new samples. Moreover, ReWiS leverages multi-antenna, multi-receiver diversity, as well as fine-grained frequency resolution, to improve the overall robustness of the algorithms. Finally, we propose a technique based on singular value decomposition (SVD) to make the FSL input constant irrespective of the number of receiver antennas. We prototype ReWiS using off-the-shelf Wi-Fi equipment and showcase its performance by considering a compelling use case of human activity recognition. To this end, we perform an extensive data collection campaign in three different propagation environments with two human subjects. We evaluate the impact of each diversity component on the performance and compare ReWiS with a traditional convolutional neural network (CNN) approach. Experimental results show that ReWiS improves the performance by about 40% with respect to existing single-antenna low-resolution approaches. Moreover, when compared to a CNN-based approach, ReWiS shows 35% higher accuracy and less than a 10% drop in accuracy when tested in different environments, while the CNN accuracy drops by more than 45%. To allow reproducibility of our results and to address the current dearth of Wi-Fi sensing datasets, we pledge to release our 60 GB dataset and the entire code repository to the community.
@article{bahadori2022rewis,abbr={Conference},bibtex_show={true},title={ReWiS: Reliable Wi-Fi Sensing Through Few-Shot Multi-Antenna Multi-Receiver CSI Learning},author={Bahadori, Niloofar and Ashdown, Jonathan and Restuccia, Francesco},booktitle={2022 IEEE 23rd International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)},year={2022},volume={},number={},pages={50-59},doi={10.1109/WoWMoM54355.2022.00027}}
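The SVD step mentioned above can be pictured with a short NumPy sketch (an illustration of the idea, not the ReWiS code; the kept rank k is an assumed parameter): retaining the k dominant right-singular vectors of the antennas-by-subcarriers CSI matrix yields a fixed-size input no matter how many antennas the receiver has.

```python
import numpy as np

def csi_fixed_size(csi, k=2):
    """(n_antennas, n_subcarriers) CSI -> (k, n_subcarriers), antenna-agnostic."""
    _, s, vh = np.linalg.svd(csi, full_matrices=False)
    return s[:k, None] * vh[:k]   # k dominant spatial components

rng = np.random.default_rng(0)
for n_ant in (2, 3, 4):           # receivers with different antenna counts
    csi = rng.standard_normal((n_ant, 64)) + 1j * rng.standard_normal((n_ant, 64))
    print(csi_fixed_size(csi).shape)   # (2, 64) every time
```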
2021
Journal
DeepFIR: Channel-Robust Physical-Layer Deep Learning Through Adaptive Waveform Filtering
Restuccia, Francesco, D’Oro, Salvatore, Al-Shawabka, Amani, Rendon, Bruno Costa, Ioannidis, Stratis, and Melodia, Tommaso
Deep learning can be used to classify waveform characteristics (e.g., modulation) with accuracy levels that are hardly attainable with traditional techniques. Recent research has demonstrated that one of the most crucial challenges in wireless deep learning is to counteract the channel action, which may significantly alter the waveform features. The problem is further exacerbated by the fact that deep learning algorithms are hardly re-trainable in real time due to their sheer size. This paper proposes DeepFIR, a framework to counteract the channel action in wireless deep learning algorithms without retraining the underlying deep learning model. The key intuition is that through the application of a carefully-optimized digital finite impulse response (FIR) filter at the transmitter’s side, we can apply tiny modifications to the waveform to strengthen its features according to the current channel conditions. We mathematically formulate the Waveform Optimization Problem (WOP) as the problem of finding the optimum FIR to be used on a waveform to improve the classifier’s accuracy. We also propose a data-driven methodology to train the FIRs directly with dataset inputs. We extensively evaluate DeepFIR on an experimental testbed of 20 software-defined radios, as well as on two datasets made up of 500 ADS-B devices and 500 WiFi devices, respectively, and on a 24-class modulation dataset. Experimental results show that our approach (i) increases the accuracy of the radio fingerprinting models by about 35%, 50% and 58%; (ii) decreases an adversary’s accuracy by about 54% when trying to imitate other devices’ fingerprints by using their filters; (iii) achieves 27% improvement over the state of the art on a 100-device dataset; (iv) increases by 2× the accuracy on the modulation dataset.
@article{restuccia2021deepfir,abbr={Journal},title={DeepFIR: Channel-Robust Physical-Layer Deep Learning Through Adaptive Waveform Filtering},author={Restuccia, Francesco and D’Oro, Salvatore and Al-Shawabka, Amani and Rendon, Bruno Costa and Ioannidis, Stratis and Melodia, Tommaso},journal={IEEE Transactions on Wireless Communications},bibtex_show={true},volume={20},number={12},pages={8054--8066},html={https://ieeexplore.ieee.org/abstract/document/9470953},year={2021},publisher={IEEE}}
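The waveform-filtering intuition above is simple to sketch: a short complex FIR applied at the transmitter slightly reshapes the waveform. In the toy below the tap values are arbitrary; in DeepFIR they would come from solving the WOP:

```python
import numpy as np

def apply_fir(iq, taps):
    """Filter a complex baseband waveform with complex FIR taps."""
    return np.convolve(iq, taps, mode="same")

iq = np.exp(2j * np.pi * 0.05 * np.arange(256))       # toy baseband tone
taps = np.array([0.02 - 0.01j, 1.0, 0.015 + 0.02j])   # near-identity FIR
tx = apply_fir(iq, taps)   # tiny modification that strengthens features
```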
Journal
Coordinated 5G Network Slicing: How Constructive Interference Can Boost Network Throughput
D’Oro, Salvatore, Bonati, Leonardo, Restuccia, Francesco, and Melodia, Tommaso
Radio access network (RAN) slicing is a virtualization technology that partitions radio resources into multiple autonomous virtual networks. Since RAN slicing can be tailored to provide diverse performance requirements, it will be pivotal to achieve the high-throughput and low-latency communications that next-generation (5G) systems have long yearned for. To this end, effective RAN slicing algorithms must (i) partition radio resources so as to leverage coordination among multiple base stations and thus boost network throughput; and (ii) reduce interference across different slices to guarantee slice isolation and avoid performance degradation. The ultimate goal of this paper is to design RAN slicing algorithms that address the above two requirements. First, we show that the RAN slicing problem can be formulated as a 0-1 Quadratic Programming problem, and we prove its NP-hardness. Second, we propose an optimal solution for small-scale 5G network deployments, and we present three approximation algorithms to make the optimization problem tractable when the network size increases. We first analyze the performance of our algorithms through simulations, and then demonstrate their performance through experiments on a standard-compliant LTE testbed with 2 base stations and 6 smartphones. Our results show that not only do our algorithms efficiently partition RAN resources, but also improve network throughput by 27% and increase by 2× the signal-to-interference-plus-noise ratio.
@article{d2021coordinated,abbr={Journal},title={Coordinated 5G Network Slicing: How Constructive Interference Can Boost Network Throughput},author={D’Oro, Salvatore and Bonati, Leonardo and Restuccia, Francesco and Melodia, Tommaso},bibtex_show={true},journal={IEEE/ACM Transactions on Networking},volume={29},number={4},pages={1881--1894},year={2021},html={https://ieeexplore.ieee.org/abstract/document/9411723},publisher={IEEE}}
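As a miniature of the 0-1 Quadratic Programming formulation mentioned above (the matrix, costs, and budget are made-up toy values, not the paper's model), brute force over binary allocation vectors shows the problem shape:

```python
import itertools
import numpy as np

# Toy 0-1 QP: pick binary allocations x maximizing x^T Q x under a budget.
Q = np.array([[3.0, 1.0, -2.0],
              [1.0, 2.0,  0.5],
              [-2.0, 0.5, 4.0]])
cost = np.array([2, 1, 3])
budget = 4

feasible = (np.array(x) for x in itertools.product([0, 1], repeat=3)
            if cost @ np.array(x) <= budget)
print(max(feasible, key=lambda x: x @ Q @ x))   # best toy allocation
```

Exhaustive search is only viable at this scale, which is why the paper resorts to approximation algorithms as the network size grows.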
Journal
The Implantable Internet of Medical Things: Toward Lifelong Remote Monitoring and Treatment of Chronic Diseases
Guida, Raffaele, Dave, Neil, Restuccia, Francesco, Demirors, Emrecan, and Melodia, Tommaso
GetMobile: Mobile Computing and Communications 2021
The promise of real-time detection and response to life-crippling diseases brought by the Implantable Internet of Medical Things (IIoMT) has recently spurred substantial advances in implantable technologies. Yet, existing medical devices do not provide at once the miniaturized end-to-end body monitoring, wireless communication and remote powering capabilities needed to implement IIoMT applications. This paper fills the existing research gap by presenting U-Verse, the first FDA-compliant rechargeable IIoMT platform packing sensing, computation, communication, and recharging circuits into a penny-scale platform. Extensive experimental evaluation indicates that U-Verse (i) can be wirelessly recharged in tens of minutes, storing several orders of magnitude more energy than the state of the art; (ii) with a single charge, can operate from a few hours to several days. Finally, U-Verse is demonstrated through (i) a closed-loop application that sends data via ultrasounds through real porcine meat; and (ii) a real-time reconfigurable pacemaker.
@article{guida2021implantable,abbr={Journal},title={The Implantable Internet of Medical Things: Toward Lifelong Remote Monitoring and Treatment of Chronic Diseases},author={Guida, Raffaele and Dave, Neil and Restuccia, Francesco and Demirors, Emrecan and Melodia, Tommaso},journal={GetMobile: Mobile Computing and Communications},bibtex_show={true},volume={24},number={3},pages={20--25},year={2021},html={https://dl.acm.org/doi/abs/10.1145/3447853.3447861},publisher={ACM New York, NY, USA}}
Conference
Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks
Tehrani, Peyman, Restuccia, Francesco, and Levorato, Marco
In 2021 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN) 2021
Next Generation (NextG) networks are expected to support demanding tactile internet applications such as augmented reality and connected autonomous vehicles. Whereas recent innovations bring the promise of larger link capacity, their sensitivity to the environment and erratic performance defy traditional model-based control rationales. Zero-touch data-driven approaches can improve the ability of the network to adapt to the current operating conditions. Tools such as reinforcement learning (RL) algorithms can build an optimal control policy solely based on a history of observations. Specifically, deep RL (DRL), which uses a deep neural network (DNN) as a predictor, has been shown to achieve good performance even in complex environments and with high-dimensional inputs. However, the training of DRL models requires a large amount of data, which may limit its adaptability to the ever-evolving statistics of the underlying environment. Moreover, wireless networks are inherently distributed systems, where centralized DRL approaches would require excessive data exchange, while fully distributed approaches may result in slower convergence rates and performance degradation. In this paper, to address these challenges, we propose a federated learning (FL) approach to DRL, which we refer to as federated DRL (F-DRL), where base stations (BSs) collaboratively train the embedded DNN by only sharing models’ weights rather than training data. We evaluate two distinct versions of F-DRL, value-based and policy-based, and show the superior performance they achieve compared to distributed and centralized DRL.
@inproceedings{tehrani2021federated,abbr={Conference},title={Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks},author={Tehrani, Peyman and Restuccia, Francesco and Levorato, Marco},booktitle={2021 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN)},bibtex_show={true},pages={248--253},year={2021},html={https://ieeexplore.ieee.org/abstract/document/9677132},organization={IEEE}}
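The weight-sharing step described above is, at its core, federated averaging. A minimal NumPy sketch with illustrative names (not the paper's code):

```python
import numpy as np

def fed_avg(local_weights, n_samples):
    """Average per-base-station weights, weighted by local data size.
    `local_weights`: one list of layer arrays per base station."""
    total = sum(n_samples)
    return [sum(w[layer] * (n / total)
                for w, n in zip(local_weights, n_samples))
            for layer in range(len(local_weights[0]))]

# Three base stations share only model weights, never training data:
rng = np.random.default_rng(1)
bs_models = [[rng.standard_normal((4, 4)), rng.standard_normal(4)]
             for _ in range(3)]
global_model = fed_avg(bs_models, n_samples=[100, 250, 150])
```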
Conference
Colosseum: Large-Scale Wireless Experimentation Through Hardware-in-the-Loop Network Emulation
Bonati, Leonardo, Johari, Pedram, Polese, Michele, D’Oro, Salvatore, Mohanti, Subhramoy, Tehrani-Moayyed, Miead, Villa, Davide, Shrivastava, Shweta, Tassie, Chinenye, Yoder, Kurt, and others
In 2021 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN) 2021
Colosseum is an open-access and publicly-available large-scale wireless testbed for experimental research via virtualized and softwarized waveforms and protocol stacks on a fully programmable, “white-box” platform. Through 256 state-of-the-art software-defined radios and a massive channel emulator core, Colosseum can model virtually any scenario, enabling the design, development and testing of solutions at scale in a variety of deployments and channel conditions. These Colosseum radio-frequency scenarios are reproduced through high-fidelity FPGA-based emulation with finite-impulse response filters. Filters model the taps of desired wireless channels and apply them to the signals generated by the radio nodes, faithfully mimicking the conditions of real-world wireless environments. In this paper, we introduce Colosseum as a testbed that is for the first time open to the research community. We describe the architecture of Colosseum and its experimentation and emulation capabilities. We then demonstrate the effectiveness of Colosseum for experimental research at scale through exemplary use cases including prevailing wireless technologies (e.g., cellular and Wi-Fi) in spectrum sharing and unmanned aerial vehicle scenarios. A roadmap for future Colosseum updates concludes the paper.
@inproceedings{bonati2021colosseum,abbr={Conference},title={Colosseum: Large-Scale Wireless Experimentation Through Hardware-in-the-Loop Network Emulation},author={Bonati, Leonardo and Johari, Pedram and Polese, Michele and D’Oro, Salvatore and Mohanti, Subhramoy and Tehrani-Moayyed, Miead and Villa, Davide and Shrivastava, Shweta and Tassie, Chinenye and Yoder, Kurt and others},booktitle={2021 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN)},bibtex_show={true},pages={105--113},year={2021},html={https://ieeexplore.ieee.org/abstract/document/9677430},organization={IEEE}}
Conference
SeReMAS: Self-Resilient Mobile Autonomous Systems Through Predictive Edge Computing
Callegaro, Davide, Levorato, Marco, and Restuccia, Francesco
In 2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON) 2021
Edge computing enables Mobile Autonomous Systems (MASs) to execute continuous streams of heavy-duty mission-critical processing tasks, such as real-time obstacle detection and navigation. However, in practical applications, erratic patterns in channel quality, network load, and edge server load can interrupt the task flow’s execution, which necessarily leads to severe disruption of the system’s key operations. Existing work has mostly tackled the problem with reactive approaches, which cannot guarantee task-level reliability. Conversely, in this paper we focus on learning-based predictive edge computing to achieve self-resilient task offloading. By conducting a preliminary experimental evaluation, we show that there is no dominant feature that can predict the edge-MAS system reliability, which calls for an ensemble and selection of weaker features. To tackle the complexity of the problem, we propose SeReMAS, a data-driven optimization framework. We first mathematically formulate a Redundant Task Offloading Problem (RTOP), where a MAS may connect to multiple edge servers for redundancy, and needs to select which server(s) to transmit its computing tasks to, in order to maximize the probability of task execution while minimizing channel and edge resource utilization. We then create a predictor based on Deep Reinforcement Learning (DRL), which produces the optimum task assignment based on application-, network- and telemetry-based features. We prototype SeReMAS on a testbed composed of a Tarot650 quadcopter drone mounting a PixHawk flight controller, a Jetson Nano board, and three 802.11n WiFi interfaces. We extensively evaluate SeReMAS by considering an application where one drone offloads high-resolution images for real-time analysis to three edge servers on the ground. Experimental results show that SeReMAS improves the task execution probability by 17% with respect to existing reactive-based approaches. To allow full reproducibility of results, we share the dataset and code with the research community.
@inproceedings{callegaro2021seremas,abbr={Conference},title={SeReMAS: Self-Resilient Mobile Autonomous Systems Through Predictive Edge Computing},author={Callegaro, Davide and Levorato, Marco and Restuccia, Francesco},booktitle={2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)},bibtex_show={true},pages={1--9},year={2021},html={https://ieeexplore.ieee.org/abstract/document/9491618},organization={IEEE}}
Conference
DeepLoRa: Fingerprinting LoRa Devices at Scale Through Deep Learning and Data Augmentation
Al-Shawabka, Amani, Pietraski, Philip, Pattar, Sudhir B, Restuccia, Francesco, and Melodia, Tommaso
In Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing 2021
The Long Range (LoRa) protocol for low-power wide-area networks (LPWANs) is a strong candidate to enable the massive roll-out of the Internet of Things (IoT) because of its low cost, impressive sensitivity (-137dBm), and massive scalability potential. As tens of thousands of tiny LoRa devices are deployed over large geographic areas, a key component to the success of LoRa will be the development of reliable and robust authentication mechanisms. To this end, Radio Frequency Fingerprinting (RFFP) through deep learning (DL) has been heralded as an effective zero-power supplement or alternative to energy-hungry cryptography. Existing work on LoRa RFFP has mostly focused on small-scale testbeds and low-dimensional learning techniques; however, many challenges remain. Key among them are authentication techniques robust to a wide variety of channel variations over time and supporting a vast population of devices.
In this work, we advance the state of the art by presenting (i) the first massive experimental evaluation of DL RFFP and (ii) new data augmentation techniques for LoRa designed to counter the degradation introduced by the wireless channel. Specifically, we collected and publicly shared more than 1TB of waveform data from 100 bit-similar devices (with identical manufacturing processes) over different deployment scenarios (outdoor vs. indoor) and spanning several days. We train and test diverse DL models (convolutional and recurrent neural networks) using either preamble or payload data slices. We compare three different representations of the received signal: (i) IQ, (ii) amplitude-phase, and (iii) spectrogram. Finally, we propose a novel data augmentation technique called DeepLoRa to enhance the LoRa RFFP performance. Results show that (i) training the CNN models with the IQ representation is not always the best choice for fingerprinting LoRa radios: training CNNs and RNN-LSTMs with amplitude-phase and spectrogram representations may increase the fingerprinting performance in small- and medium-scale testbeds; (ii) using only payload data in the fingerprinting process outperforms using only preamble data; and (iii) the DeepLoRa data augmentation technique improves the classification accuracy from 19% to 36% in the challenging RFFP case of training on data collected on a different day than the testing data. Moreover, DeepLoRa raises the accuracy from 82% to 91% when training and testing 100 devices with data collected on the same day.
@inproceedings{al2021deeplora,abbr={Conference},title={DeepLoRa: Fingerprinting LoRa Devices at Scale Through Deep Learning and Data Augmentation},author={Al-Shawabka, Amani and Pietraski, Philip and Pattar, Sudhir B and Restuccia, Francesco and Melodia, Tommaso},booktitle={Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing},pages={251--260},bibtex_show={true},html={https://dl.acm.org/doi/abs/10.1145/3466772.3467054},year={2021}}
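The three signal representations compared above can be derived from the same complex samples as follows (a sketch; the STFT window length is an arbitrary choice):

```python
import numpy as np
from scipy import signal

def make_representations(iq, nperseg=64):
    """I/Q, amplitude-phase, and spectrogram views of one waveform slice."""
    iq_2ch = np.stack([iq.real, iq.imag])              # (2, N) I/Q planes
    amp_phase = np.stack([np.abs(iq), np.angle(iq)])   # (2, N)
    _, _, zxx = signal.stft(iq, nperseg=nperseg)       # time-frequency view
    return iq_2ch, amp_phase, np.abs(zxx)
```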
Conference
The Tags Are Alright: Robust Large-Scale RFID Clone Detection Through Federated Data-Augmented Radio Fingerprinting
Piva, Mauro, Maselli, Gaia, and Restuccia, Francesco
In Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing 2021
Millions of RFID tags are pervasively used all around the globe to inexpensively identify a wide variety of everyday-use objects. One of the key issues of RFID is that tags cannot use energy-hungry cryptography, and thus can be easily cloned. For this reason, radio fingerprinting (RFP) is a compelling approach that leverages the unique imperfections in the tag’s wireless circuitry to achieve large-scale RFID clone detection. Recent work, however, has unveiled that time-varying channel conditions can significantly decrease the accuracy of the RFP process. Prior art in RFID identification does not consider this critical aspect, and instead focuses on custom-tailored feature extraction techniques and data collection with static channel conditions. For this reason, we propose the first large-scale investigation into RFP of RFID tags with dynamic channel conditions. Specifically, we perform a massive data collection campaign on a testbed composed of 200 off-the-shelf identical RFID tags and a software-defined radio (SDR) tag reader. We collect data with different tag-reader distances in an over-the-air configuration. To emulate implanted RFID tags, we also collect data with two different kinds of porcine meat inserted between the tag and the reader. We use this rich dataset to train and test several convolutional neural network (CNN)-based classifiers in a variety of channel conditions. Our investigation reveals that training and testing on different channel conditions drastically degrades the classifier’s accuracy. For this reason, we propose a novel training framework based on federated machine learning (FML) and data augmentation (DAG) to boost the accuracy. Extensive experimental results indicate that (i) our FML approach improves accuracy by up to 48%; (ii) our DAG approach improves the FML performance by up to 19% and the single-dataset performance by 31%. To the best of our knowledge, this is the first paper experimentally demonstrating the efficacy of FML and DAG on a large device population. To allow full replicability, we are sharing with the research community our fully-labeled 200-GB RFID waveform dataset, as well as the entirety of our code and trained models, concurrently with our submission.
@inproceedings{piva2021tags,abbr={Conference},title={The Tags Are Alright: Robust Large-Scale RFID Clone Detection Through Federated Data-Augmented Radio Fingerprinting},author={Piva, Mauro and Maselli, Gaia and Restuccia, Francesco},booktitle={Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing},html={https://dl.acm.org/doi/abs/10.1145/3466772.3467033},pages={41--50},bibtex_show={true},year={2021}}
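A sketch of the data-augmentation (DAG) idea described above, with illustrative parameter ranges (the paper's actual augmentation pipeline may differ): stored waveforms are perturbed with a random mild multipath filter, a random frequency offset, and noise to emulate channel variation.

```python
import numpy as np

def augment_iq(iq, rng):
    """Channel-style augmentation of one complex waveform."""
    taps = rng.normal(0, 0.1, 3) + 1j * rng.normal(0, 0.1, 3)
    taps[0] += 1.0                             # random mild multipath
    out = np.convolve(iq, taps, mode="same")
    n = np.arange(len(iq))
    out = out * np.exp(2j * np.pi * rng.uniform(-1e-4, 1e-4) * n)  # CFO
    return out + (rng.normal(0, 0.01, len(iq))
                  + 1j * rng.normal(0, 0.01, len(iq)))             # AWGN
```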
Conference
DeepBeam: Deep Waveform Learning for Coordination-Free Beam Management in mmWave Networks
Polese, Michele, Restuccia, Francesco, and Melodia, Tommaso
In Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing 2021
Highly directional millimeter wave (mmWave) radios need to perform beam management to establish and maintain reliable links. To achieve this objective, existing solutions mostly rely on explicit coordination between the transmitter (TX) and the receiver (RX), which significantly reduces the airtime available for communication and further complicates the network protocol design. This paper advances the state of the art by presenting DeepBeam, a framework for beam management that does not require pilot sequences from the TX, nor any beam sweeping or synchronization from the RX. This is achieved by inferring (i) the Angle of Arrival (AoA) of the beam and (ii) the actual beam being used by the transmitter through waveform-level deep learning on ongoing transmissions from the TX to other receivers. In this way, the RX can associate Signal-to-Noise-Ratio (SNR) levels to beams without explicit coordination with the TX. This is possible because different beam patterns introduce different "impairments" to the waveform, which can be subsequently learned by a convolutional neural network (CNN). To demonstrate the generality of DeepBeam, we conduct an extensive experimental data collection campaign where we collect more than 4 TB of mmWave waveforms with (i) 4 phased array antennas at 60.48 GHz, (ii) 2 codebooks containing 24 one-dimensional beams and 12 two-dimensional beams; (iii) 3 receiver gains; (iv) 3 different AoAs; (v) multiple TX and RX locations. Moreover, we collect waveform data with two custom-designed mmWave software-defined radios with fully-digital beamforming architectures at 58 GHz. We also implement our learning models in FPGA to evaluate latency performance. Results show that DeepBeam (i) achieves accuracy of up to 96%, 84% and 77% with a 5-beam, 12-beam and 24-beam codebook, respectively; (ii) reduces latency by up to 7x with respect to the 5G NR initial beam sweep in a default configuration and with a 12-beam codebook. The waveform dataset and the full DeepBeam code repository are publicly available.
@inproceedings{polese2021deepbeam,abbr={Conference},title={DeepBeam: Deep Waveform Learning for Coordination-Free Beam Management in mmWave Networks},author={Polese, Michele and Restuccia, Francesco and Melodia, Tommaso},booktitle={Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing},html={https://dl.acm.org/doi/abs/10.1145/3466772.3467035},pages={61--70},bibtex_show={true},year={2021}}
Conference
A Blockchain Definition to Clarify its Role for the Internet of Things
Ghiro, Lorenzo, Restuccia, Francesco, D’Oro, Salvatore, Basagni, Stefano, Melodia, Tommaso, Maccari, Leonardo, and Cigno, Renato Lo
In 2021 19th Mediterranean Communication and Computer Networking Conference (MedComNet) 2021
The term blockchain is used for disparate projects, ranging from cryptocurrencies to applications for the Internet of Things (IoT). The concept of blockchain therefore appears blurred, as the same technology cannot empower applications with extremely different requirements, levels of security and performance. This position paper elaborates on the theory of distributed systems to advance a clear definition of blockchain allowing us to clarify its possible role in the IoT. The definition binds together three elements that, as a whole, delineate those unique features that distinguish the blockchain from other distributed ledger technologies: immutability, transparency and anonymity. We note that immutability, which is imperative for securing blockchains, imposes remarkable resource consumption. Moreover, while transparency demands no confidentiality, anonymity enhances privacy but prevents user identification. As such, we raise the concern that these blockchain features clash with the requirements of most IoT applications, where devices are power-constrained, data needs to be kept confidential, and users must be clearly identifiable. We consequently downplay the role of the blockchain for the IoT: this can act as a ledger external to the IoT architecture, invoked as seldom as possible and only to record the aggregate results of myriads of local (IoT) transactions that are most of the time performed off-chain to meet performance and scalability requirements.
@inproceedings{ghiro2021blockchain,abbr={Conference},title={A Blockchain Definition to Clarify its Role for the Internet of Things},author={Ghiro, Lorenzo and Restuccia, Francesco and D'Oro, Salvatore and Basagni, Stefano and Melodia, Tommaso and Maccari, Leonardo and Cigno, Renato Lo},booktitle={2021 19th Mediterranean Communication and Computer Networking Conference (MedComNet)},pages={1--8},bibtex_show={true},year={2021},html={https://ieeexplore.ieee.org/abstract/document/9501280},organization={IEEE}}
Conference
Deepsense: Fast Wideband Spectrum Sensing Through Real-Time In-the-Loop Deep Learning
Uvaydov, Daniel, D’Oro, Salvatore, Restuccia, Francesco, and Melodia, Tommaso
In IEEE INFOCOM 2021-IEEE Conference on Computer Communications 2021
Spectrum sharing will be a key technology to tackle spectrum scarcity in the sub-6 GHz bands. To fairly access the shared bandwidth, wireless users will necessarily need to quickly sense large portions of spectrum and opportunistically access unutilized bands. The key unaddressed challenges of spectrum sensing are that (i) it has to be performed with extremely low latency over large bandwidths to detect tiny spectrum holes and to guarantee strict real-time digital signal processing (DSP) constraints; (ii) its underlying algorithms need to be extremely accurate, and flexible enough to work with different wireless bands and protocols to find application in real-world settings. To the best of our knowledge, the literature lacks spectrum sensing techniques able to accomplish both requirements. In this paper, we propose DeepSense, a software/hardware framework for real-time wideband spectrum sensing that relies on real-time deep learning tightly integrated into the transceiver’s baseband processing logic to detect and exploit unutilized spectrum bands. DeepSense uses a convolutional neural network (CNN) implemented in the wireless platform’s hardware fabric to analyze a small portion of the unprocessed baseband waveform to automatically extract the maximum amount of information with the least amount of I/Q samples. We extensively validate the accuracy, latency and generality performance of DeepSense with (i) a 400 GB dataset containing hundreds of thousands of WiFi transmissions collected “in the wild” with different Signal-to-Noise-Ratio (SNR) conditions and over different days; (ii) a dataset of transmissions collected using our own software-defined radio testbed; and (iii) a synthetic dataset of LTE transmissions under controlled SNR conditions. We also measure the real-time latency of the CNNs trained on the three datasets with an FPGA implementation, and compare our approach with a fixed energy threshold mechanism. Results show that our learning-based approach can deliver a precision and recall of 98% and 97% respectively and a latency as low as 0.61ms. For reproducibility and benchmarking purposes, we pledge to share the code and the datasets used in this paper with the community.
@inproceedings{uvaydov2021deepsense,abbr={Conference},title={Deepsense: Fast Wideband Spectrum Sensing Through Real-Time In-the-Loop Deep Learning},author={Uvaydov, Daniel and D’Oro, Salvatore and Restuccia, Francesco and Melodia, Tommaso},booktitle={IEEE INFOCOM 2021-IEEE Conference on Computer Communications},pages={1--10},bibtex_show={true},html={https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9488764},year={2021},organization={IEEE}}
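For reference, the fixed energy-threshold baseline that DeepSense is compared against can be as simple as the sketch below (FFT size and threshold are arbitrary assumptions):

```python
import numpy as np

def energy_detector(iq, n_fft=64, thresh_db=-10.0):
    """Mark a frequency bin occupied when its power exceeds a fixed threshold."""
    psd = np.abs(np.fft.fft(iq, n_fft)) ** 2 / n_fft
    psd_db = 10 * np.log10(psd / psd.max())
    return psd_db > thresh_db   # boolean occupancy mask, one entry per bin
```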
Conference
Can You Fix My Neural Network? Real-Time Adaptive Waveform Synthesis for Resilient Wireless Signal Classification
D’Oro, Salvatore, Restuccia, Francesco, and Melodia, Tommaso
In IEEE INFOCOM 2021-IEEE Conference on Computer Communications 2021
Due to the sheer scale of the Internet of Things (IoT) and 5G, the wireless spectrum is becoming severely congested. For this reason, wireless devices will need to continuously adapt to current spectrum conditions by changing their communication parameters in real time. Therefore, wireless signal classification (WSC) will become a compelling necessity to decode fast-changing signals from dynamic transmitters. Thanks to its capability of classifying complex phenomena without explicit mathematical modeling, deep learning (DL) has been demonstrated to be a key enabler of WSC. Although DL can achieve a very high accuracy under certain conditions, recent research has unveiled that the wireless channel can disrupt the features learned by the DL model during training, thus drastically reducing the classification performance in real-world live settings. Since retraining classifiers is cumbersome after deployment, existing work has leveraged the usage of carefully-tailored Finite Impulse Response (FIR) filters that, when applied at the transmitter’s side, can restore the features that are lost because of the channel actions, i.e., waveform synthesis. However, these approaches compute FIRs using offline optimization strategies, which limits their efficacy in highly-dynamic channel settings. In this paper, we improve the state of the art by proposing Chares, a Deep Reinforcement Learning (DRL)-based framework for channel-resilient adaptive waveform synthesis. Chares adapts to new and unseen channel conditions by optimally computing the FIRs in real time through DRL. Chares is a DRL agent whose architecture is based upon the Twin Delayed Deep Deterministic Policy Gradients (TD3), which requires minimal feedback from the receiver and explores a continuous action space for best performance. Chares has been extensively evaluated on two well-known datasets with an extensive number of channels. We have also evaluated the real-time latency of Chares with an implementation on a field-programmable gate array (FPGA). Results show that Chares increases the accuracy by up to 4.1x with respect to performing no waveform synthesis, and by 1.9x with respect to existing work, and can compute new actions within 41 μs.
@inproceedings{d2021can,abbr={Conference},html={https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9488865},title={Can You Fix My Neural Network? Real-Time Adaptive Waveform Synthesis for Resilient Wireless Signal Classification},author={D’Oro, Salvatore and Restuccia, Francesco and Melodia, Tommaso},booktitle={IEEE INFOCOM 2021-IEEE Conference on Computer Communications},pages={1--10},bibtex_show={true},year={2021},organization={IEEE}}
Conference
SteaLTE: Private 5G Cellular Connectivity as a Service with Full-stack Wireless Steganography
Bonati, Leonardo, D’Oro, Salvatore, Restuccia, Francesco, Basagni, Stefano, and Melodia, Tommaso
In IEEE INFOCOM 2021-IEEE Conference on Computer Communications 2021
Fifth-generation (5G) systems will extensively employ radio access network (RAN) softwarization. This key innovation enables the instantiation of "virtual cellular networks" running on different slices of the shared physical infrastructure. In this paper, we propose the concept of Private Cellular Connectivity as a Service (PCCaaS), where infrastructure providers deploy covert network slices known only to a subset of users. We then present SteaLTE as the first realization of a PCCaaS-enabling system for cellular networks. At its core, SteaLTE utilizes wireless steganography to disguise data as noise to adversarial receivers. Differently from previous work, however, it takes a full-stack approach to steganography, contributing an LTE-compliant steganographic protocol stack for PCCaaS-based communications, and packet schedulers and operations to embed covert data streams on top of traditional cellular traffic (primary traffic). SteaLTE balances undetectability and performance by mimicking channel impairments so that covert data waveforms are almost indistinguishable from noise. We evaluate the performance of SteaLTE on an indoor LTE-compliant testbed under different traffic profiles, distance and mobility patterns. We further test it on the outdoor PAWR POWDER platform over long-range cellular links. Results show that in most experiments SteaLTE imposes little loss of primary traffic throughput in the presence of covert data transmissions (< 6%), making it suitable for undetectable PCCaaS networking.
@inproceedings{bonati2021stealte,abbr={Conference},title={SteaLTE: Private 5G Cellular Connectivity as a Service with Full-stack Wireless Steganography},author={Bonati, Leonardo and D’Oro, Salvatore and Restuccia, Francesco and Basagni, Stefano and Melodia, Tommaso},booktitle={IEEE INFOCOM 2021-IEEE Conference on Computer Communications},pages={1--10},bibtex_show={true},year={2021},html={https://ieeexplore.ieee.org/abstract/document/9488889},organization={IEEE}}
Wi-Fi is among the most successful wireless technologies ever invented. As Wi-Fi becomes more and more present in public and private spaces, it becomes natural to leverage its ubiquitousness to implement groundbreaking wireless sensing applications such as human presence detection, activity recognition, and object tracking, just to name a few. This paper reports ongoing efforts by the IEEE 802.11bf Task Group (TGbf), which is defining the appropriate modifications to existing Wi-Fi standards to enhance sensing capabilities through 802.11-compliant waveforms. We summarize objectives and timeline of TGbf, and discuss some of the most interesting proposed technical features discussed so far. We also introduce a roadmap of research challenges pertaining to Wi-Fi sensing and its integration with future Wi-Fi technologies and emerging spectrum bands, hoping to elicit further activities by both the research community and TGbf.
Preprint
What is a Blockchain? A Definition to Clarify the Role of the Blockchain in the Internet of Things
Ghiro, Lorenzo, Restuccia, Francesco, D’Oro, Salvatore, Basagni, Stefano, Melodia, Tommaso, Maccari, Leonardo, and Cigno, Renato Lo
The use of the term blockchain is documented for disparate projects, from cryptocurrencies to applications for the Internet of Things (IoT), and many more. The concept of blockchain therefore appears blurred, as it is hard to believe that the same technology can empower applications that have extremely different requirements and exhibit dissimilar performance and security. This position paper elaborates on the theory of distributed systems to advance a clear definition of blockchain that allows us to clarify its role in the IoT. This definition inextricably binds together three elements that, as a whole, provide the blockchain with those unique features that distinguish it from other distributed ledger technologies: immutability, transparency and anonymity. We note however that immutability comes at the expense of remarkable resource consumption, transparency demands no confidentiality, and anonymity prevents user identification and registration. This is in stark contrast to the requirements of most IoT applications that are made up of resource-constrained devices, whose data need to be kept confidential and users to be clearly known. Building on the proposed definition, we derive new guidelines for selecting the proper distributed ledger technology depending on application requirements and trust models, identifying common pitfalls leading to improper applications of the blockchain. We finally indicate a feasible role of the blockchain for the IoT: myriads of local, IoT transactions can be aggregated off-chain and then be successfully recorded on an external blockchain as a means of public accountability when required.
@article{ghiro2021blockchaio,abbr={Preprint },title={What is a Blockchain? A Definition to Clarify the Role of the Blockchain in the Internet of Things},author={Ghiro, Lorenzo and Restuccia, Francesco and D'Oro, Salvatore and Basagni, Stefano and Melodia, Tommaso and Maccari, Leonardo and Cigno, Renato Lo},journal={arXiv preprint arXiv:2102.03750},bibtex_show={true},html={https://arxiv.org/abs/2102.03750},year={2021}}
2020
Journal
Massive-Scale I/Q Datasets for WiFi Radio Fingerprinting
Al-Shawabka, Amani, Restuccia, Francesco, D’Oro, Salvatore, and Melodia, Tommaso
Recent research has proved the effectiveness of neural networks (NNs) in "fingerprinting" (i.e., identifying) wireless radios, by determining the hardware impairments imposed by the transmitter during the waveform transmission process. The artificial neurons of the NN layers are employed to identify and track the radios’ unique impairments by training on a large amount of raw data collected from these radios. Today, the radio fingerprinting field lacks such a large-scale waveform database that can provide a standard benchmark for researchers working in this field. In this paper, we publicly share 2TB of IEEE 802.11 a/g (WiFi) data obtained from 20 bit-similar Software-Defined Radios (SDRs).
@article{al2020massive,abbr={Journal},title={Massive-Scale I/Q Datasets for WiFi Radio Fingerprinting},author={Al-Shawabka, Amani and Restuccia, Francesco and D’Oro, Salvatore and Melodia, Tommaso},journal={Computer Networks},volume={182},pages={107566},html={https://www.sciencedirect.com/science/article/pii/S1389128620312123},bibtex_show={true},year={2020},publisher={Elsevier}}
Journal
Arena: A 64-Antenna SDR-Based Ceiling Grid Testing Platform for Sub-6 GHz 5G-and-Beyond Radio Spectrum Research
Arena is an open-access wireless testing platform based on a grid of antennas mounted on the ceiling of a large office-space environment. Each antenna is connected to programmable software-defined radios (SDRs) enabling sub-6 GHz 5G-and-beyond spectrum research. With 12 computational servers, 24 SDRs synchronized at the symbol level, and a total of 64 antennas, Arena provides the computational power and the scale to foster new technology development in some of the most crowded spectrum bands. Arena is based on a three-tier design, where the servers and the SDRs are housed in a double rack in a dedicated room, while the antennas are hung off the ceiling of a 2240-square-foot office space and cabled to the radios through 100 ft-long cables. This ensures a reconfigurable, scalable, and repeatable real-time experimental evaluation in a real wireless indoor environment. In this paper, we introduce the architecture, capabilities, and system design choices of Arena, and provide details of the software and hardware implementation of various testbed components. Furthermore, we describe key capabilities by providing examples of published work that employed Arena for applications as diverse as synchronized MIMO transmission schemes, multi-hop ad hoc networking, multi-cell 5G networks, AI-powered Radio-Frequency fingerprinting, secure wireless communications, and spectrum sensing for cognitive radio.
@article{bertizzolo2020arena,abbr={Journal},title={Arena: A 64-Antenna SDR-Based Ceiling Grid Testing Platform for Sub-6 GHz 5G-and-Beyond Radio Spectrum Research},author={Bertizzolo, Lorenzo and Bonati, Leonardo and Demirors, Emrecan and Al-Shawabka, Amani and D’Oro, Salvatore and Restuccia, Francesco and Melodia, Tommaso},journal={Computer Networks},volume={181},pages={107436},bibtex_show={true},year={2020},html={https://www.sciencedirect.com/science/article/pii/S1389128620311257},publisher={Elsevier}}
Journal
Deep Learning at the Physical Layer: System Challenges and Applications to 5G and Beyond
The unprecedented requirements of IoT have made fine-grained optimization of spectrum resources an urgent necessity. Thus, designing techniques able to extract knowledge from the spectrum in real time and select the optimal spectrum access strategy accordingly has become more important than ever. Moreover, 5G networks will require complex management schemes to deal with problems such as adaptive beam management and rate selection. Although deep learning (DL) has been successful in modeling complex phenomena, commercially available wireless devices are still very far from actually adopting learning-based techniques to optimize their spectrum usage. In this article, we first discuss the need for real-time DL at the physical layer, and then summarize the current state of the art and existing limitations. We conclude the article by discussing an agenda of research challenges and how DL can be applied to address crucial problems in 5G and beyond networks.
@article{restuccia2020deep,abbr={Journal},title={Deep Learning at the Physical Layer: System Challenges and Applications to 5G and Beyond},author={Restuccia, Francesco and Melodia, Tommaso},journal={IEEE Communications Magazine},volume={58},number={10},pages={58--64},bibtex_show={true},year={2020},html={https://ieeexplore.ieee.org/abstract/document/9247524},publisher={IEEE}}
Conference
Comparative Performance Evaluation of mmWave 5G NR and LTE in a Campus Scenario
Moayyed, Miead Tehrani, Restuccia, Francesco, and Basagni, Stefano
In 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall) 2020
The extremely high data rates provided by communications in the millimeter-wave (mmWave) frequency bands can help address the unprecedented demands of next-generation wireless communications. However, atmospheric attenuation and high propagation loss severely limit the coverage of mmWave networks. To overcome these challenges, multiple-input multiple-output (MIMO) provides beamforming capabilities and high-gain steerable antennas to expand communication coverage at mmWave frequencies. The main contribution of this paper is the performance evaluation of mmWave communications on top of the recently released NR standard for 5G cellular networks. Furthermore, we compare the performance of NR with the 4G long-term evolution (LTE) standard in a highly realistic campus environment. We consider physical-layer constraints such as transmit power, ambient noise, receiver noise figure, and practical antenna gain in both cases, and examine bitrate and area coverage as the criteria to benchmark the performance. We also show the impact of MIMO technology in improving the performance of the 5G NR cellular network. Our evaluation demonstrates that 5G NR provides on average a 6.7x bitrate improvement without noticeable coverage degradation.
@inproceedings{moayyed2020comparative,abbr={Conference},title={Comparative Performance Evaluation of mmWave 5G NR and LTE in a Campus Scenario},author={Moayyed, Miead Tehrani and Restuccia, Francesco and Basagni, Stefano},booktitle={2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall)},html={https://ieeexplore.ieee.org/abstract/document/9348727},pages={1--5},bibtex_show={true},year={2020},organization={IEEE}}
Conference
HyBloSE: Hybrid Blockchain for Secure-by-Design Smart Environments
Maselli, Gaia, Piva, Mauro, and Restuccia, Francesco
In Proceedings of the 3rd Workshop on Cryptocurrencies and Blockchains for Distributed Systems 2020
Although smart environments are a key component of the Internet of Things (IoT), it is also clear that billions of connected doors, washing machines, ovens and other appliances will ultimately raise security and privacy concerns. Early work in this area, as well as most commercial solutions, has adopted a centralized client/server approach, neglecting the multitude of risks induced by unfair control of the server side. This has made the adoption of a decentralized and trustless framework essential to guarantee device security. Nevertheless, decentralized proposals are hardly applicable due to costs, slowness, and privacy issues. In this paper, we make the use of blockchain practical for smart environments by designing HyBloSE, a secure-by-design and lightweight blockchain-based framework able to run on low-power devices without additional hardware. HyBloSE is built by using Delegated Proof of Authority and a Moving Window Blockchain. We evaluate HyBloSE through a network emulator and real experiments on different Raspberry Pi platforms. Results show that HyBloSE guarantees a higher security level in terms of resiliency to internal and external attacks compared to centralized solutions, with overhead below 0.38s per operation and less than $4 per month for unlimited operations. Furthermore, we show how Proof of Authority is better suited than Proof of Work to private IoT scenarios.
@inproceedings{maselli2020hyblose,abbr={Conference},title={HyBloSE: Hybrid Blockchain for Secure-by-Design Smart Environments},author={Maselli, Gaia and Piva, Mauro and Restuccia, Francesco},booktitle={Proceedings of the 3rd Workshop on Cryptocurrencies and Blockchains for Distributed Systems},html={https://dl.acm.org/doi/abs/10.1145/3410699.3413793},pages={23--28},bibtex_show={true},year={2020}}
Conference
Generalized Wireless Adversarial Deep Learning
Restuccia, Francesco, D’Oro, Salvatore, Al-Shawabka, Amani, Rendon, Bruno Costa, Chowdhury, Kaushik, Ioannidis, Stratis, and Melodia, Tommaso
In Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning 2020
Deep learning techniques can classify spectrum phenomena (e.g., waveform modulation) with accuracy levels that were once thought impossible. Although we have recently seen many advances in this field, extensive work in computer vision has demonstrated that an adversary can "crack" a classifier by designing inputs that "steer" the classifier away from the ground truth. This paper advances the state of the art by proposing a generalized analysis and evaluation of adversarial machine learning (AML) attacks to deep learning systems in the wireless domain. We postulate a series of adversarial attacks, and formulate a Generalized Wireless Adversarial Machine Learning Problem (GWAP) where we analyze the combined effect of the wireless channel and the adversarial waveform on the efficacy of the attacks. We extensively evaluate the performance of our attacks on a state-of-the-art 1,000-device radio fingerprinting dataset, and a 24-class modulation dataset. Results show that our algorithms can decrease the classifiers’ accuracy by up to 3x while keeping the waveform distortion to a minimum.
@inproceedings{restuccia2020generalized,abbr={Conference},title={Generalized Wireless Adversarial Deep Learning},author={Restuccia, Francesco and D'Oro, Salvatore and Al-Shawabka, Amani and Rendon, Bruno Costa and Chowdhury, Kaushik and Ioannidis, Stratis and Melodia, Tommaso},html={https://dl.acm.org/doi/abs/10.1145/3395352.3402625},booktitle={Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning},pages={49--54},bibtex_show={true},year={2020}}
Conference
PolymoRF: Polymorphic Wireless Receivers Through Physical-Layer Deep Learning
Restuccia, Francesco, and Melodia, Tommaso
In Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing 2020
Today’s wireless technologies are largely based on inflexible designs, which makes them inefficient and prone to a variety of wireless attacks. To address this key issue, wireless receivers will need to (i) infer on-the-fly the physical-layer parameters currently used by transmitters; and if needed, (ii) change their hardware and software structures to demodulate the incoming waveform. In this paper, we introduce PolymoRF, a deep learning-based polymorphic receiver able to reconfigure itself in real time based on the inferred waveform parameters. Our key technical innovations are (i) a novel embedded deep learning architecture, called RFNet, which enables the solution of key waveform inference problems; (ii) a generalized hardware/software architecture that integrates RFNet with radio components and signal processing. We prototype PolymoRF on a custom software-defined radio platform, and show through extensive over-the-air experiments that PolymoRF achieves throughput within 87% of a perfect-knowledge Oracle system, thus demonstrating for the first time that polymorphic receivers are feasible.
@inproceedings{restuccia2020polymorf,abbr={Conference},title={PolymoRF: Polymorphic Wireless Receivers Through Physical-Layer Deep Learning},author={Restuccia, Francesco and Melodia, Tommaso},booktitle={Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing},html={https://dl.acm.org/doi/abs/10.1145/3397166.3409132},pages={271--280},bibtex_show={true},year={2020}}
Conference
Sl-EDGE: Network Slicing at the Edge
D’Oro, Salvatore, Bonati, Leonardo, Restuccia, Francesco, Polese, Michele, Zorzi, Michele, and Melodia, Tommaso
In Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing 2020
Network slicing of multi-access edge computing (MEC) resources is expected to be a pivotal technology for the success of 5G networks and beyond. The key challenge that sets MEC slicing apart from traditional resource allocation problems is that edge nodes depend on tightly-intertwined and strictly-constrained networking, computation and storage resources. Therefore, instantiating MEC slices without incurring resource over-provisioning is hardly addressable with existing slicing algorithms. The main innovation of this paper is Sl-EDGE, a unified MEC slicing framework that allows network operators to instantiate heterogeneous slice services (e.g., video streaming, caching, 5G network access) on edge devices. We first describe the architecture and operations of Sl-EDGE, and then show that the problem of optimally instantiating joint network-MEC slices is NP-hard. Thus, we propose near-optimal algorithms that leverage key similarities among edge nodes and resource virtualization to instantiate heterogeneous slices 7.5x faster and within 25% of the optimum. We first assess the performance of our algorithms through extensive numerical analysis, and show that Sl-EDGE instantiates slices 6x more efficiently than state-of-the-art MEC slicing algorithms. Furthermore, experimental results on a 24-radio testbed with 9 smartphones demonstrate that Sl-EDGE simultaneously provides highly-efficient slicing of joint LTE connectivity, video streaming over WiFi, and ffmpeg video transcoding.
@inproceedings{d2020sl,abbr={Conference},title={Sl-EDGE: Network Slicing at the Edge},author={D'Oro, Salvatore and Bonati, Leonardo and Restuccia, Francesco and Polese, Michele and Zorzi, Michele and Melodia, Tommaso},booktitle={Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing},html={https://dl.acm.org/doi/abs/10.1145/3397166.3409133},pages={1--10},bibtex_show={true},year={2020}}
Conference
DeepWiERL: Bringing Deep Reinforcement Learning to the Internet of Self-Adaptive Things
Restuccia, Francesco, and Melodia, Tommaso
In IEEE INFOCOM 2020-IEEE Conference on Computer Communications 2020
Recent work has demonstrated that cutting-edge advances in deep reinforcement learning (DRL) may be leveraged to empower wireless devices with the much-needed ability to "sense" current spectrum and network conditions and "react" in real time by either exploiting known optimal actions or exploring new actions. Yet, understanding whether real-time DRL can be at all applied in the resource-challenged embedded IoT domain, as well as designing IoT-tailored DRL systems and architectures, still remains mostly uncharted territory. This paper bridges the existing gap between the extensive theoretical research on wireless DRL and its system-level applications by presenting Deep Wireless Embedded Reinforcement Learning (DeepWiERL), a general-purpose, hybrid software/hardware DRL framework specifically tailored for embedded IoT wireless devices. DeepWiERL provides abstractions, circuits, software structures and drivers to support the training and real-time execution of state-of-the-art DRL algorithms on the device’s hardware. Moreover, DeepWiERL includes a novel supervised DRL model selection and bootstrap (S-DMSB) technique that leverages transfer learning and high-level synthesis (HLS) circuit design to orchestrate a neural network architecture that satisfies hardware and application throughput constraints and speeds up the DRL algorithm convergence. Experimental evaluation on a fully-custom software-defined radio testbed (i) proves for the first time the feasibility of real-time DRL-based algorithms on a real-world wireless platform with multiple channel conditions; (ii) shows that DeepWiERL supports a 16x higher data rate and consumes 14x less energy than a software-based implementation; and (iii) indicates that S-DMSB may improve the DRL convergence time by 6x and increase the obtained reward by 45% if prior channel knowledge is available.
@inproceedings{restuccia2020deepwierl,abbr={Conference},title={DeepWiERL: Bringing Deep Reinforcement Learning to the Internet of Self-Adaptive Things},author={Restuccia, Francesco and Melodia, Tommaso},booktitle={IEEE INFOCOM 2020-IEEE Conference on Computer Communications},pages={844--853},bibtex_show={true},year={2020},html={https://ieeexplore.ieee.org/abstract/document/9155461},organization={IEEE}}
Conference
Exposing the Fingerprint: Dissecting the Impact of the Wireless Channel on Radio Fingerprinting
Radio fingerprinting uniquely identifies wireless devices by leveraging tiny hardware-level imperfections inevitably present in off-the-shelf radio circuitry. This way, devices can be directly identified at the physical layer by analyzing the unprocessed received waveform – thus avoiding energy-expensive upper-layer cryptography that resource-challenged embedded devices may not be able to afford. Recent advances have proven that convolutional neural networks (CNNs) – thanks to their multidimensional mappings – can achieve fingerprinting accuracy levels impossible to achieve by traditional low-dimensional algorithms. The same research, however, has also suggested that the wireless channel may negatively impact the accuracy of CNN-based radio fingerprinting algorithms by making device-unique hardware imperfections much harder to recognize.
In spite of the growing interest in radio fingerprinting research by academia and DARPA, the wireless research community still lacks (i) a large-scale open dataset for radio fingerprinting collected in diverse environments and rich, diverse channel conditions; and (ii) a full-fledged, systematic, quantitative investigation of the impact of the wireless channel on the accuracy of CNN-based radio fingerprinting algorithms. The key contribution of this paper is to bridge this gap by (i) collecting and sharing with the community more than 7TB of wireless data obtained from 20 wireless devices with identical RF circuitry (and thus, a worst-case scenario for fingerprinting) over the course of several days in (a) an anechoic chamber, (b) an in-the-wild testbed, and (c) with cable connections; and (ii) providing a first-of-its-kind evaluation of the impact of the wireless channel on CNN-based fingerprinting algorithms through (a) the 7TB experimental dataset and (b) a 400GB dataset provided by DARPA containing hundreds of thousands of transmissions from thousands of WiFi and ADS-B devices with different SNR conditions. Experimental results conclude that (i) the wireless channel impacts the classification accuracy significantly, i.e., from 85% to 9% and from 30% to 17% in the experimental and DARPA dataset, respectively; and that (ii) equalizing I/Q data can increase the accuracy to a significant extent (i.e., by up to 23%) when the number of devices increases significantly.
@inproceedings{al2020exposing,abbr={Conference},title={Exposing the Fingerprint: Dissecting the Impact of the Wireless Channel on Radio Fingerprinting},author={Al-Shawabka, Amani and Restuccia, Francesco and D’Oro, Salvatore and Jian, Tong and Rendon, Bruno Costa and Soltani, Nasim and Dy, Jennifer and Ioannidis, Stratis and Chowdhury, Kaushik and Melodia, Tommaso},booktitle={IEEE INFOCOM 2020-IEEE Conference on Computer Communications},html={https://ieeexplore.ieee.org/abstract/document/9155259},bibtex_show={true},pages={646--655},year={2020},organization={IEEE}}
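The finding that equalization helps suggests a simple intuition: removing the channel's multiplicative action exposes the residual hardware impairments the CNN keys on. As a hedged illustration (a one-tap flat-fading model, far simpler than the per-subcarrier equalization a WiFi receiver would actually perform), a least-squares channel estimate from a known preamble can be divided out before the I/Q samples reach the classifier:

import numpy as np

def equalize_one_tap(rx, preamble):
    # Least-squares estimate of a single complex channel tap from a
    # known preamble: h = (p^H r) / (p^H p); then divide it out.
    h = np.vdot(preamble, rx[:len(preamble)]) / np.vdot(preamble, preamble)
    return rx / h

rng = np.random.default_rng(1)
preamble = np.exp(2j * np.pi * rng.random(16))
h_true = 0.7 * np.exp(0.9j)                   # unknown flat-fading channel
burst = np.concatenate([preamble, preamble])  # toy burst: preamble + data
noise = 0.01 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
rx = h_true * burst + noise
eq = equalize_one_tap(rx, preamble)           # eq ~ burst, up to noise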
Preprint
DeepFIR: Addressing the Wireless Channel Action in Physical-Layer Deep Learning
Restuccia, Francesco, D’Oro, Salvatore, Al-Shawabka, Amani, Rendon, Bruno Costa, Ioannidis, Stratis, and Melodia, Tommaso
Deep learning can be used to classify waveform characteristics (e.g., modulation) with accuracy levels that are hardly attainable with traditional techniques. Recent research has demonstrated that one of the most crucial challenges in wireless deep learning is to counteract the channel action, which may significantly alter the waveform features. The problem is further exacerbated by the fact that deep learning algorithms are hardly re-trainable in real time due to their sheer size. This paper proposes DeepFIR, a framework to counteract the channel action in wireless deep learning algorithms without retraining the underlying deep learning model. The key intuition is that through the application of a carefully-optimized digital finite impulse response (FIR) filter at the transmitter’s side, we can apply tiny modifications to the waveform to strengthen its features according to the current channel conditions. We mathematically formulate the Waveform Optimization Problem (WOP) as the problem of finding the optimum FIR to be used on a waveform to improve the classifier’s accuracy. We also propose a data-driven methodology to train the FIRs directly with dataset inputs. We extensively evaluate DeepFIR on an experimental testbed of 20 software-defined radios, as well as on two datasets made up of 500 ADS-B devices and 500 WiFi devices, and on a 24-class modulation dataset. Experimental results show that our approach (i) increases the accuracy of the radio fingerprinting models by about 35%, 50% and 58%; (ii) decreases an adversary’s accuracy by about 54% when trying to imitate other devices’ fingerprints by using their filters; (iii) achieves a 27% improvement over the state of the art on a 100-device dataset; (iv) increases the accuracy on the modulation dataset by 2x.
@article{restuccia2020deepfir,abbr={Preprint},title={DeepFIR: Addressing the Wireless Channel Action in Physical-Layer Deep Learning},author={Restuccia, Francesco and D'Oro, Salvatore and Al-Shawabka, Amani and Rendon, Bruno Costa and Ioannidis, Stratis and Melodia, Tommaso},journal={arXiv preprint arXiv:2005.04226},html={https://arxiv.org/abs/2005.04226},bibtex_show={true},year={2020}}
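The transmitter-side mechanism in DeepFIR reduces to a short complex-valued convolution over the baseband I/Q stream. The sketch below shows only the filtering step; in the paper the tap values are the decision variables of the WOP and are optimized against the trained classifier, whereas the near-identity taps here are placeholders:

import numpy as np

def apply_fir(iq, taps):
    # Shape the baseband waveform with a short complex FIR; "same" mode
    # preserves the sample count so framing is unaffected.
    return np.convolve(iq, taps, mode="same")

taps = np.array([1.0 + 0.0j, 0.05 - 0.02j, -0.01 + 0.03j])  # placeholder taps
iq = np.exp(2j * np.pi * 0.1 * np.arange(128))              # toy baseband tone
shaped = apply_fir(iq, taps)                                # tiny, deliberate distortion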
Preprint
Hacking the Waveform: Generalized Wireless Adversarial Deep Learning
Restuccia, Francesco, D’Oro, Salvatore, Al-Shawabka, Amani, Rendon, Bruno Costa, Chowdhury, Kaushik, Ioannidis, Stratis, and Melodia, Tommaso
This paper advances the state of the art by proposing the first comprehensive analysis and experimental evaluation of adversarial learning attacks to wireless deep learning systems. We postulate a series of adversarial attacks, and formulate a Generalized Wireless Adversarial Machine Learning Problem (GWAP) where we analyze the combined effect of the wireless channel and the adversarial waveform on the efficacy of the attacks. We propose a new neural network architecture called FIRNet, which can be trained to "hack" a classifier based only on its output. We extensively evaluate the performance on (i) a 1,000-device radio fingerprinting dataset, and (ii) a 24-class modulation dataset. Results obtained with several channel conditions show that our algorithms can decrease the classifier accuracy by up to 3x. We also experimentally evaluate FIRNet on a radio testbed, and show that our data-driven blackbox approach can confuse the classifier up to 97% of the time while keeping the waveform distortion to a minimum.
@article{restuccia2020hacking,abbr={Preprint},title={Hacking the Waveform: Generalized Wireless Adversarial Deep Learning},author={Restuccia, Francesco and D'Oro, Salvatore and Al-Shawabka, Amani and Rendon, Bruno Costa and Chowdhury, Kaushik and Ioannidis, Stratis and Melodia, Tommaso},html={https://arxiv.org/abs/2005.02270},journal={arXiv preprint arXiv:2005.02270},bibtex_show={true},year={2020}}
Preprint
My SIM is Leaking My Data: Exposing Self-Login Privacy Breaches in Smartphones
Coletta, Andrea, Maselli, Gaia, Piva, Mauro, Silvestri, Domenicomichele, and Restuccia, Francesco
In recent years, attention to the management of users’ personal data has increased significantly, to ensure security and privacy. Several new regulations and public bodies have been created to guarantee and regulate the protection of user data. Nevertheless, users are still exposed to a high number of issues and leaks. In this paper we expose a new security leak for smartphone users, which allows personal data to be stolen by accessing the mobile operator’s user page when auto-login is employed. We show how any "apparently" genuine app can steal these data from some mobile operators, or how an attacker can steal them by exploiting a shared Internet connection, e.g., through a hotspot granted by the user. We analyze different mobile operator companies to demonstrate the highlighted issues, and we discover that more than 40 million smartphones are vulnerable. Finally, we propose some possible countermeasures.
@article{coletta2020my,abbr={Preprint},title={My SIM is Leaking My Data: Exposing Self-Login Privacy Breaches in Smartphones},author={Coletta, Andrea and Maselli, Gaia and Piva, Mauro and Silvestri, Domenicomichele and Restuccia, Francesco},journal={arXiv preprint arXiv:2003.08458},html={https://arxiv.org/abs/2003.08458},bibtex_show={true},year={2020}}
2019
Journal
No Radio Left Behind: Radio Fingerprinting Through Deep Learning of Physical-Layer Hardware Impairments
Due to the unprecedented scale of the Internet of Things, designing scalable, accurate, energy-efficient and tamper-proof authentication mechanisms has now become more important than ever. To this end, in this paper we present ORACLE, a novel system based on convolutional neural networks (CNNs) to "fingerprint" (i.e., identify) a unique radio from a large pool of devices by deep-learning the fine-grained hardware impairments imposed by radio circuitry on physical-layer I/Q samples. First, we show how hardware-specific imperfections are learned by the CNN framework. Then, we extensively evaluate the performance of ORACLE on several first-of-its-kind large-scale datasets of WiFi transmissions collected "in the wild", as well as a dataset of nominally-identical (i.e., equal baseband signals) WiFi devices, reaching 80-90% accuracy in many cases, with the error gap arising due to channel-induced effects. Finally, we show, through an experimental testbed, how this accuracy can reach over 99% by intentionally inserting and learning the effect of controlled impairments at the transmitter side, to completely remove the impact of the wireless channel. Furthermore, to scale this approach to classifying potentially thousands of radios, we propose an impairment hopping spread spectrum (IHOP) technique that is resilient to spoofing attacks.
@article{sankhe2019no,abbr={Journal},title={No Radio Left Behind: Radio Fingerprinting Through Deep Learning of Physical-Layer Hardware Impairments},author={Sankhe, Kunal and Belgiovine, Mauro and Zhou, Fan and Angioloni, Luca and Restuccia, Frank and D’Oro, Salvatore and Melodia, Tommaso and Ioannidis, Stratis and Chowdhury, Kaushik},journal={IEEE Transactions on Cognitive Communications and Networking},bibtex_show={true},volume={6},number={1},pages={165--178},year={2019},html={https://ieeexplore.ieee.org/abstract/document/8882379},publisher={IEEE}}
Journal
Toward Operator-to-Waveform 5G Radio Access Network Slicing
D’Oro, Salvatore, Restuccia, Francesco, and Melodia, Tommaso
RAN slicing refers to a vision where multiple MNOs are assigned virtual networks (i.e., slices) instantiated on top of the same physical network resources. Existing work in this area has addressed RAN slicing at different levels of network abstraction, but has often neglected the multitude of tightly intertwined inter-level operations involved in practical slicing systems. For this reason, this article discusses a novel framework for operator-to-waveform 5G RAN slicing. In the proposed framework, slicing operations are treated holistically, from the MNOs’ selection of base stations and maximum number of users, down to the waveform-level scheduling of resource blocks. Simulation results show that the proposed framework generates RAN slices where 95 percent of allocated resources can be used to support coordination-based 5G transmission technologies, and facilitates the coexistence of multiple RAN slices, providing up to 120 percent improvement in terms of SINR experienced by mobile users.
@article{d2020toward,abbr={Journal},html={https://ieeexplore.ieee.org/abstract/document/9071984},title={Toward Operator-to-Waveform 5G Radio Access Network Slicing},author={D'Oro, Salvatore and Restuccia, Francesco and Melodia, Tommaso},journal={IEEE Communications Magazine},bibtex_show={true},volume={58},number={4},pages={18--23},year={2019},publisher={IEEE}}
Journal
Machine Learning for Wireless Communications in the Internet of Things: A Comprehensive Survey
The Internet of Things (IoT) is expected to require more effective and efficient wireless communications than ever before. For this reason, techniques such as spectrum sharing, dynamic spectrum access, extraction of signal intelligence and optimized routing will soon become essential components of the IoT wireless communication paradigm. In this vision, IoT devices must be able to not only learn to autonomously extract spectrum knowledge on-the-fly from the network but also leverage such knowledge to dynamically change appropriate wireless parameters (e.g., frequency band, symbol modulation, coding rate, route selection, etc.) to reach the network’s optimal operating point. Given that the majority of the IoT will be composed of tiny, mobile, and energy-constrained devices, traditional techniques based on a priori network optimization may not be suitable, since (i) an accurate model of the environment may not be readily available in practical scenarios; (ii) the computational requirements of traditional optimization techniques may prove unbearable for IoT devices. To address the above challenges, much research has been devoted to exploring the use of machine learning to address problems in the IoT wireless communications domain. The reason behind machine learning’s popularity is that it provides a general framework to solve very complex problems where a model of the phenomenon being learned is too complex to derive or too dynamic to be summarized in mathematical terms.
This work provides a comprehensive survey of the state of the art in the application of machine learning techniques to address key problems in IoT wireless communications with an emphasis on its ad hoc networking aspect. First, we present extensive background notions of machine learning techniques. Then, by adopting a bottom-up approach, we examine existing work on machine learning for the IoT at the physical, data-link and network layer of the protocol stack. Thereafter, we discuss directions taken by the community towards hardware implementation to ensure the feasibility of these techniques. Additionally, before concluding, we also provide a brief discussion of the application of machine learning in IoT beyond wireless communication. Finally, each of these discussions is accompanied by a detailed analysis of the related open problems and challenges.
@article{jagannath2019machine,abbr={Journal},html={https://www.sciencedirect.com/science/article/pii/S1570870519300812},title={Machine Learning for Wireless Communications in the Internet of Things: A Comprehensive Survey},author={Jagannath, Jithin and Polosky, Nicholas and Jagannath, Anu and Restuccia, Francesco and Melodia, Tommaso},journal={Ad Hoc Networks},bibtex_show={true},volume={93},pages={101913},year={2019},publisher={Elsevier}}
Chapter
The Role of Machine Learning and Radio Reconfigurability in the Quest for Wireless Security
Restuccia, Francesco, D’Oro, Salvatore, Zhang, Liyang, and Melodia, Tommaso
Wireless networks require fast-acting, effective and efficient security mechanisms able to tackle unpredictable, dynamic, and stealthy attacks. In recent years, we have seen the steadfast rise of technologies based on machine learning and software-defined radios, which provide the necessary tools to address existing and future security threats without the need of direct human-in-the-loop intervention. On the other hand, these techniques have been so far used in an ad hoc fashion, without any tight interaction between the attack detection and mitigation phases. In this chapter, we propose and discuss a Learning-based Wireless Security (LeWiS) framework that provides a closed-loop approach to the problem of cross-layer wireless security. Along with discussing the LeWiS framework, we also survey recent advances in cross-layer wireless security.
@incollection{restuccia2019role,abbr={Chapter},title={The Role of Machine Learning and Radio Reconfigurability in the Quest for Wireless Security},author={Restuccia, Francesco and D’Oro, Salvatore and Zhang, Liyang and Melodia, Tommaso},booktitle={Proactive and Dynamic Network Defense},pages={191--221},bibtex_show={true},html={https://link.springer.com/chapter/10.1007/978-3-030-10597-6_8},year={2019},publisher={Springer}}
Conference
Impairment Shift Keying: Covert Signaling by Deep Learning of Controlled Radio Imperfections
The broadcast nature of the wireless spectrum necessarily implies the possibility of eavesdropping, as well as malicious modification of waveforms through inexpensive, widely available software-defined radios (SDRs). This paper proposes a method for covert wireless communications that can be used to authenticate a device or exchange private information between devices. Our approach, called Impairment Shift Keying (ISK), introduces small yet controlled modifications to the radio transmitter hardware, which distort regular standards-compliant waveforms, such as WiFi, with only a 1% increase in bit error rate. A deep convolutional neural network (CNN) is trained to learn these overlay signal variations, serving as a low-overhead classifier that returns a binary 0 or 1 per detected impairment pattern. By mapping device-specific injected impairment patterns to signal variations, ISK validates device IDs with only a few in-phase (I) and quadrature (Q) samples. Furthermore, through an experimental testbed, ISK is shown to be resilient to channel and SNR variations, allowing a throughput of 93-1500 Kbps on the covert channel that remains undetected by other receivers.
@inproceedings{sankhe2019impairment,abbr={Conference},title={Impairment Shift Keying: Covert Signaling by Deep Learning of Controlled Radio Imperfections},author={Sankhe, Kunal and Restuccia, Francesco and D'Oro, Salvatore and Jian, Tong and Wang, Zifeng and Al-Shawabka, Amani and Dy, Jennifer and Melodia, Tommaso and Ioannidis, Stratis and Chowdhury, Kaushik},booktitle={MILCOM 2019-2019 IEEE Military Communications Conference (MILCOM)},pages={598--603},bibtex_show={true},year={2019},html={https://ieeexplore.ieee.org/abstract/document/9021079},organization={IEEE}}
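One concrete way to realize a controlled impairment (an illustrative stand-in, since ISK's exact impairment set is implemented at the hardware level) is a deliberate I/Q imbalance, which adds a faint image term that barely moves the bit error rate yet forms a pattern a receiver-side CNN can learn:

import numpy as np

def inject_iq_imbalance(x, covert_bit, eps=0.02):
    # A deliberate I/Q imbalance adds the image term eps*conj(x); flipping
    # its sign per frame encodes one covert bit. eps is illustrative and
    # would be tuned against the ~1% BER budget reported above.
    return x + (eps if covert_bit else -eps) * np.conj(x)

rng = np.random.default_rng(2)
frame = np.exp(2j * np.pi * rng.random(256))   # toy unit-modulus frame
marked = inject_iq_imbalance(frame, covert_bit=1)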
Conference
Software-Defined Radios to Accelerate mmWave Wireless Innovation
@inproceedings{zheng2019software,abbr={Conference},abstract={Imagine a dystopian world in which pianists did not have access to pianos. They would dream up sheet music, and have a computer simulate a recital. They would then go to their premier conference PianoCom to present a paper on their novel sheet music; and perhaps even spend the evening arguing about whose sheet music was more pleasing to the ears.},title={Software-Defined Radios to Accelerate mmWave Wireless Innovation},author={Zheng, Kai and Dhananjay, Aditya and Mezzavilla, Marco and Madanayake, Arjuna and Bharadwaj, Shubhendu and Ariyarathna, Viduneth and Gosain, Abhimanyu and Melodia, Tommaso and Restuccia, Francesco and Jornet, Josep and others},booktitle={2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN)},bibtex_show={true},pages={1--4},html={https://ieeexplore.ieee.org/abstract/document/8935877},year={2019},organization={IEEE}}
Conference
U-Verse: A Miniaturized Platform for End-to-End Closed-Loop Implantable Internet of Medical Things Systems
The promise of real-time detection and response to life-crippling diseases brought by the Implantable Internet of Medical Things (IIoMT) has recently spurred substantial advances in implantable technologies. Yet, existing devices do not provide at once the miniaturized end-to-end sensing-computation-communication-recharging capabilities to implement IIoMT applications. This paper fills the existing research gap by presenting U-Verse, the first FDA-compliant rechargeable IIoMT platform packing sensing, computation, communication, and recharging circuits into a penny-scale platform. U-Verse uses a single miniaturized transducer for data exchange and for wireless charging. To predict U-Verse’s performance, we (i) derive and experimentally validate a mathematical model of U-Verse’s charging efficiency; and (ii) experimentally calculate the resistance-reactance parameters of our ultrasonic transducer and rectifying circuit. We design a matching circuit to maximize the amount of power transferred from the outside. We also address the challenge of fabricating a full-fledged cm-scale printed circuit board (PCB) for U-Verse. Extensive experimental evaluation indicates that U-Verse (i) is able to recharge 330mF and 15F energy storage units - several orders of magnitude larger than in existing work - in under 20 and 60 minutes, respectively, at a depth of 5cm; (ii) achieves a stored charge duration of up to 610 and 40 hours in the case of battery and supercapacitor energy storage, respectively. Finally, U-Verse is demonstrated through (i) a closed-loop application where a periodic sensing/actuation task sends data via ultrasound through real porcine meat; and (ii) a real-time reconfigurable pacemaker.
@inproceedings{guida2019u,abbr={Conference},html={https://dl.acm.org/doi/abs/10.1145/3356250.3360026},title={U-Verse: A Miniaturized Platform for End-to-End Closed-Loop Implantable Internet of Medical Things Systems},author={Guida, Raffaele and Dave, Neil and Restuccia, Francesco and Demirors, Emrecan and Melodia, Tommaso},booktitle={Proceedings of the 17th Conference on Embedded Networked Sensor Systems},bibtex_show={true},pages={311--323},year={2019}}
Conference
MillimeTera: Toward a Large-Scale Open-Source mmWave and Terahertz Experimental Testbed
The promise of widespread 5th generation (5G) and beyond wireless systems can only be fulfilled through extensive experimental campaigns aimed at validating the large body of theoretical findings on millimeter wave (mmWave) and Terahertz (THz) frequencies. However, experimental research efforts in this field are often stymied by the lack of open hardware, open-source software, and affordable testbeds accessible by the research community at large, which is now forced to perform simulation-based research or - if at all possible - small-scale, ad hoc experiments. After discussing existing research challenges in mmWave and THz testbeds, in this paper we propose MillimeTera, a vision for a new generation of disruptive experimental platforms that will radically transform the status quo in mmWave and THz research. We next discuss our preliminary hardware and software efforts, and finally provide a roadmap of our main design and development goals in the years to come.
@inproceedings{polese2019millimetera,abbr={Conference},html={https://dl.acm.org/doi/abs/10.1145/3349624.3356764},title={MillimeTera: Toward a Large-Scale Open-Source mmWave and Terahertz Experimental Testbed},author={Polese, Michele and Restuccia, Francesco and Gosain, Abhimanyu and Jornet, Josep and Bhardwaj, Shubhendu and Ariyarathna, Viduneth and Mandal, Soumyajit and Zheng, Kai and Dhananjay, Aditya and Mezzavilla, Marco and others},booktitle={Proceedings of the 3rd ACM Workshop on Millimeter-wave Networks and Sensing Systems},bibtex_show={true},pages={27--32},year={2019}}
Conference
Hiding Data in Plain Sight: Undetectable Wireless Communications Through Pseudo-Noise Asymmetric Shift Keying
D’Oro, Salvatore, Restuccia, Francesco, and Melodia, Tommaso
In IEEE INFOCOM 2019-IEEE Conference on Computer Communications 2019
Undetectable wireless transmissions are fundamental to avoid eavesdroppers or censorship by authoritarian governments. To address this issue, wireless steganography "hides" covert information inside primary information by slightly modifying the transmitted waveform, such that primary information will still be decodable, while covert information will be seen as noise by agnostic receivers. Since the addition of covert information inevitably decreases the SNR of the primary transmission, a key challenge in wireless steganography is to mathematically analyze and optimize the impact of the covert channel on the primary channel as a function of different channel conditions. Another core issue is to make sure that the covert channel is almost undetectable by eavesdroppers. Existing approaches are protocol-specific, and thus their performance cannot be assessed and optimized in general scenarios. To address this research gap, we notice that existing wireless technologies rely on phase-keying modulations (e.g., BPSK, QPSK) that in most cases do not use the channel up to its Shannon capacity. Therefore, the residual capacity can be leveraged to implement a wireless system based on a pseudo-noise asymmetric shift keying (PN-ASK) modulation, where covert symbols are mapped by shifting the amplitude of primary symbols. This way, covert information will be undetectable, since a receiver expecting phase-modulated symbols will see their shift in amplitude as an effect of channel/path loss degradation. Through rigorous mathematical analysis, we first investigate the SER of PN-ASK as a function of the channel; then, we find the optimal PN-ASK parameters that optimize primary and covert throughput under different channel conditions. We evaluate the throughput performance and undetectability of PN-ASK through extensive simulations and on an experimental testbed based on USRP N210 software-defined radios. Results indicate that PN-ASK improves the throughput by more than 8x with respect to prior art. Finally, we demonstrate through experiments that PN-ASK is able to transmit covert data on top of IEEE 802.11g frames, which are correctly decoded by an off-the-shelf laptop WiFi card without any hardware modifications.
@inproceedings{d2019hiding,abbr={Conference},title={Hiding Data in Plain Sight: Undetectable Wireless Communications Through Pseudo-Noise Asymmetric Shift Keying},author={D’Oro, Salvatore and Restuccia, Francesco and Melodia, Tommaso},booktitle={IEEE INFOCOM 2019-IEEE Conference on Computer Communications},bibtex_show={true},pages={1585--1593},year={2019},html={https://ieeexplore.ieee.org/abstract/document/8737581},organization={IEEE}}
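The PN-ASK amplitude-shift mapping itself is compact enough to sketch (assuming unit-energy PSK and a noiseless channel; the delta below is an illustrative choice, whereas the paper optimizes the parameters per channel condition):

import numpy as np

def pn_ask_embed(psk_syms, covert_bits, delta=0.1):
    # Covert bit 1 -> (1+delta)*s, bit 0 -> (1-delta)*s. A phase-keying
    # receiver still decodes the primary symbols; the amplitude shift
    # looks like ordinary path-loss/channel variation.
    scale = 1 + delta * (2 * np.asarray(covert_bits) - 1)
    return scale * psk_syms[:len(covert_bits)]

def pn_ask_extract(rx_syms):
    # Covert receiver compares each magnitude to the nominal unit radius.
    return (np.abs(rx_syms) > 1.0).astype(int)

rng = np.random.default_rng(3)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 32)))
bits = rng.integers(0, 2, 32)
assert (pn_ask_extract(pn_ask_embed(qpsk, bits)) == bits).all()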
Conference
Jam Sessions: Analysis and Experimental Evaluation of Advanced Jamming Attacks in MIMO Networks
Zhang, Liyang, Restuccia, Francesco, Melodia, Tommaso, and Pudlewski, Scott M
In Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing 2019
Recent research advances in wireless security have shown that advanced jamming can significantly decrease the performance of wireless communications. In advanced jamming, the adversary intentionally concentrates the available energy budget on specific critical components (e.g., pilot symbols, acknowledgement packets, etc.) to (i) increase the jamming effectiveness, as more targets can be jammed with the same energy budget; and (ii) decrease the likelihood of being detected, as the channel is jammed for a shorter period of time. These key aspects make advanced jamming very stealthy yet exceptionally effective in practical scenarios. One of the fundamental challenges in designing defense mechanisms against an advanced jammer is understanding which jamming strategy yields the lowest throughput, for a given channel condition and a given amount of energy. To the best of our knowledge, this problem remains unsolved, as an analytical model to quantitatively compare advanced jamming schemes is still missing in the existing literature. To fill this gap, in this paper we conduct a comparative analysis of several of the most viable advanced jamming schemes in widely-used MIMO networks. We first mathematically model a number of advanced jamming schemes at the signal-processing level, so that a quantitative relationship between the jamming energy and the jamming effect is established. Based on the model, theorems are derived on the optimal advanced jamming scheme for an arbitrary channel condition. The theoretical findings are validated through extensive simulations and experiments on a 5-radio 2x2 MIMO testbed. Our results show that the theorems are able to predict jamming efficiency with high accuracy. Moreover, to further demonstrate that the theoretical findings are applicable to crucial real-world jamming problems, we show that the theorems can be incorporated into state-of-the-art reinforcement learning-based jamming algorithms to boost the action exploration phase so that faster convergence is achieved.
@inproceedings{zhang2019jam,abbr={Conference},html={https://dl.acm.org/doi/abs/10.1145/3323679.3326504},title={Jam Sessions: Analysis and Experimental Evaluation of Advanced Jamming Attacks in MIMO Networks},author={Zhang, Liyang and Restuccia, Francesco and Melodia, Tommaso and Pudlewski, Scott M},bibtex_show={true},booktitle={Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing},pages={61--70},year={2019}}
Conference
DeepRadioID: Real-Time Channel-Resilient Optimization of Deep Learning-Based Radio Fingerprinting Algorithms
Radio fingerprinting provides a reliable and energy-efficient IoT authentication strategy by leveraging the unique hardware-level imperfections imposed on the received wireless signal by the transmitter’s radio circuitry. Most existing approaches utilize hand-tailored, protocol-specific feature extraction techniques, which can identify devices operating under a pre-defined wireless protocol only. Conversely, by mapping inputs onto a very large feature space, deep learning algorithms can be trained to fingerprint large populations of devices operating under any wireless standard.
One of the most crucial challenges in radio fingerprinting is to counteract the action of the wireless channel, which decreases fingerprinting accuracy significantly by disrupting hardware impairments. On the other hand, due to their sheer size, deep learning algorithms are hardly re-trainable in real-time. Another aspect that is yet to be investigated is whether an adversary can successfully impersonate another device’s fingerprint. To address these key issues, this paper proposes DeepRadioID, a system to optimize the accuracy of deep-learning-based radio fingerprinting algorithms without retraining the underlying deep learning model. The key intuition is that through the application of a carefully-optimized digital finite input response filter (FIR) at the transmitter’s side, we can apply tiny modifications to the waveform to strengthen its fingerprint according to the current channel conditions. We mathematically formulate the Waveform Optimization Problem (WOP) as the problem of finding, for a given trained neural network, the optimum FIR to be used by the transmitter to improve its fingerprinting accuracy.
We extensively evaluate DeepRadioID on an experimental testbed of 20 nominally-identical software-defined radios, as well as on two datasets made up of 500 ADS-B devices and 500 WiFi devices provided by the DARPA RFMLS program. Experimental results show that DeepRadioID (i) increases fingerprinting accuracy by about 35%, 50% and 58% in the three scenarios considered; (ii) decreases an adversary’s accuracy by about 54% when trying to imitate other devices’ fingerprints by using their filters; (iii) achieves a 27% improvement over the state of the art on a 100-device dataset.
@inproceedings{restuccia2019deepradioid,abbr={Conference},html={https://dl.acm.org/doi/abs/10.1145/3323679.3326503},bibtex_show={true},title={DeepRadioID: Real-Time Channel-Resilient Optimization of Deep Learning-Based Radio Fingerprinting Algorithms},author={Restuccia, Francesco and D'Oro, Salvatore and Al-Shawabka, Amani and Belgiovine, Mauro and Angioloni, Luca and Ioannidis, Stratis and Chowdhury, Kaushik and Melodia, Tommaso},booktitle={Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing},pages={51--60},year={2019}}
Conference
Big Data Goes Small: Real-Time Spectrum-Driven Embedded Wireless Networking Through Deep Learning in the RF Loop
Restuccia, Francesco, and Melodia, Tommaso
In IEEE INFOCOM 2019-IEEE Conference on Computer Communications 2019
The explosion of 5G networks and the Internet of Things will result in an exceptionally crowded RF environment, where techniques such as spectrum sharing and dynamic spectrum access will become essential components of the wireless communication process. In this vision, wireless devices must be able to (i) learn to autonomously extract knowledge from the spectrum on-the-fly; and (ii) react in real time to the inferred spectrum knowledge by appropriately changing communication parameters, including frequency band, symbol modulation, and coding rate, among others. Traditional CPU-based machine learning suffers from high latency, and requires application-specific and computationally-intensive feature extraction/selection algorithms. Conversely, deep learning allows the analysis of massive amounts of unprocessed spectrum data without ad-hoc feature extraction. So far, deep learning has been used for offline wireless spectrum analysis only. Therefore, additional research is needed to design systems that bring deep learning algorithms directly onto the device’s hardware, tightly intertwined with the RF components, to enable real-time spectrum-driven decision-making at the physical layer. In this paper, we present RFLearn, the first system enabling spectrum knowledge extraction from unprocessed I/Q samples by deep learning directly in the RF loop. RFLearn provides (i) a complete hardware/software architecture where the CPU, radio transceiver and learning/actuation circuits are tightly connected for maximum performance; and (ii) a learning circuit design framework where the latency vs. hardware resource consumption trade-off can be explored. We implement and evaluate the performance of RFLearn on a custom software-defined radio built on a system-on-chip (SoC) ZYNQ-7000 device mounting AD9361 radio transceivers and VERT2450 antennas. We showcase the capabilities of RFLearn by applying it to solving the fundamental problems of modulation and OFDM parameter recognition. Experimental results reveal that RFLearn decreases latency and power by about 17x and 15x with respect to a software-based solution, with a comparatively low hardware resource consumption.
@inproceedings{restuccia2019big,abbr={Conference},title={Big Data Goes Small: Real-Time Spectrum-Driven Embedded Wireless Networking Through Deep Learning in the RF Loop},author={Restuccia, Francesco and Melodia, Tommaso},booktitle={IEEE INFOCOM 2019-IEEE Conference on Computer Communications},pages={2152--2160},bibtex_show={true},year={2019},html={https://ieeexplore.ieee.org/abstract/document/8737459},organization={IEEE}}
Conference
The Slice is Served: Enforcing Radio Access Network Slicing in Virtualized 5G Systems
D’Oro, Salvatore, Restuccia, Francesco, Talamonti, Alessandro, and Melodia, Tommaso
In IEEE INFOCOM 2019-IEEE Conference on Computer Communications 2019
The notions of softwarization and virtualization of the radio access network (RAN) of next-generation (5G) wireless systems are ushering in a vision where applications and services are physically decoupled from devices and network infrastructure. This crucial aspect will ultimately enable the dynamic deployment of heterogeneous services by different network operators over the same physical infrastructure. RAN slicing is a form of 5G virtualization that allows network infrastructure owners to dynamically "slice" and "serve" their network resources (i.e., spectrum, power, antennas, among others) to different mobile virtual network operators (MVNOs), according to their current needs. Once the slicing policy (i.e., the percentage of resources assigned to each MVNO) has been computed, a major challenge is how to allocate spectrum resources to MVNOs in such a way that (i) the slicing policy defined by the network owner is enforced; and (ii) the interference among different MVNOs is minimized. In this article, we mathematically formalize the RAN slicing enforcement problem (RSEP) and demonstrate its NP-hardness. For this reason, we design three approximation algorithms that render the solution scalable as the RSEP increases in size. We extensively evaluate their performance through simulations and experiments on a testbed made up of 8 software-defined radio peripherals. Experimental results reveal that not only do our algorithms enforce the slicing policies, but they can also double the total network throughput when intra-MVNO power control policies are used in conjunction.
@inproceedings{d2019slice,abbr={Conference},title={The Slice is Served: Enforcing Radio Access Network Slicing in Virtualized 5G Systems},author={D’Oro, Salvatore and Restuccia, Francesco and Talamonti, Alessandro and Melodia, Tommaso},booktitle={IEEE INFOCOM 2019-IEEE Conference on Computer Communications},bibtex_show={true},pages={442--450},html={https://ieeexplore.ieee.org/abstract/document/8737481},year={2019},organization={IEEE}}
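As a toy illustration of what "enforcing a slicing policy" means at the resource-block level, consider a largest-remainder pass that only tracks per-MVNO quotas (a deliberately simplified stand-in: the paper's approximation algorithms for the RSEP additionally minimize inter-MVNO interference):

def enforce_policy(num_rbs, policy):
    # Assign each resource block to the MVNO currently farthest below
    # its quota, so the final allocation matches the policy fractions.
    quota = {m: f * num_rbs for m, f in policy.items()}
    given = {m: 0 for m in policy}
    allocation = []
    for _ in range(num_rbs):
        mvno = max(policy, key=lambda m: quota[m] - given[m])
        given[mvno] += 1
        allocation.append(mvno)
    return allocation

# A 50/30/20 policy over 10 resource blocks yields 5, 3 and 2 blocks, respectively
print(enforce_policy(10, {"A": 0.5, "B": 0.3, "C": 0.2}))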
2018
Journal
Incentme: Effective Mechanism Design to Stimulate Crowdsensing Participants with Uncertain Mobility
Restuccia, Francesco, Ferraro, Pierluca, Silvestri, Simone, Das, Sajal K, and Re, Giuseppe Lo
Mobile crowdsensing harnesses the sensing power of modern smartphones to collect and analyze data beyond the scale of what was previously possible with traditional sensor networks. Given the participatory nature of mobile crowdsensing, it is imperative to incentivize mobile users to provide sensing services in a timely and reliable manner. Most importantly, since sensed information is often valid for a limited period of time, the capability of smartphone users to execute sensing tasks largely depends on their mobility pattern, which is often uncertain. For this reason, in this paper we propose IncentMe, a framework that solves this core issue by leveraging game-theoretic reverse auction mechanism design. After demonstrating that the proposed problem is NP-hard, we derive two mechanisms that are parallelizable and achieve a higher approximation ratio than existing work. IncentMe has been extensively evaluated on a road traffic monitoring application implemented using mobility traces of taxi cabs in San Francisco, Rome, and Beijing. Results demonstrate that the mechanisms in IncentMe outperform prior work by improving the efficiency in recruiting participants by 30 percent.
@article{restuccia2018incentme,abbr={Journal},title={Incentme: Effective Mechanism Design to Stimulate Crowdsensing Participants with Uncertain Mobility},author={Restuccia, Francesco and Ferraro, Pierluca and Silvestri, Simone and Das, Sajal K and Re, Giuseppe Lo},journal={IEEE Transactions on Mobile Computing},bibtex_show={true},volume={18},number={7},html={https://ieeexplore.ieee.org/abstract/document/8425784},pages={1571--1584},year={2018},publisher={IEEE}}
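For flavor, a generic budget-feasible reverse-auction winner selection looks as follows (a plain cost-effectiveness greedy; IncentMe's actual mechanisms additionally model mobility uncertainty and are designed to be truthful, which this sketch is not):

def greedy_reverse_auction(bids, budget):
    # Sort participants by declared cost per unit of sensing value and
    # recruit while the budget allows; a common baseline for auction-based
    # crowdsensing recruitment.
    order = sorted(bids, key=lambda b: b["cost"] / b["value"])
    winners, spent = [], 0.0
    for b in order:
        if spent + b["cost"] <= budget:
            winners.append(b["id"])
            spent += b["cost"]
    return winners

bids = [{"id": 1, "cost": 3.0, "value": 5.0},
        {"id": 2, "cost": 2.0, "value": 2.0},
        {"id": 3, "cost": 4.0, "value": 9.0}]
print(greedy_reverse_auction(bids, budget=6.0))   # recruits ids 3 and 2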
Journal
FIRST: A Framework for Optimizing Information Quality in Mobile Crowdsensing Systems
Restuccia, Francesco, Ferraro, Pierluca, Sanders, Timothy S, Silvestri, Simone, Das, Sajal K, and Re, Giuseppe Lo
Thanks to the collective action of participating smartphone users, mobile crowdsensing allows data collection at a scale and pace that was once impossible. The biggest challenge to overcome in mobile crowdsensing is that participants may exhibit malicious or unreliable behavior, thus compromising the accuracy of the data collection process. Therefore, it becomes imperative to design algorithms to accurately classify between reliable and unreliable sensing reports. To address this crucial issue, we propose a novel Framework for optimizing Information Reliability in Smartphone-based participaTory sensing (FIRST) that leverages mobile trusted participants (MTPs) to securely assess the reliability of sensing reports. FIRST models and solves the challenging problem of determining, before deployment, the minimum number of MTPs to be used to achieve the desired classification accuracy. After a rigorous mathematical study of its performance, we extensively evaluate FIRST through iOS and Android implementations of a room occupancy monitoring system, and through simulations with real-world mobility traces. Experimental results demonstrate that FIRST significantly reduces the impact of three security attacks (i.e., corruption, on/off, and collusion) by achieving a classification accuracy of almost 80% in the considered scenarios. Finally, we discuss our ongoing research efforts to test the performance of FIRST as part of the National Map Corps project.
@article{restuccia2018first,abbr={Journal},title={FIRST: A Framework for Optimizing Information Quality in Mobile Crowdsensing Systems},author={Restuccia, Francesco and Ferraro, Pierluca and Sanders, Timothy S and Silvestri, Simone and Das, Sajal K and Re, Giuseppe Lo},journal={ACM Transactions on Sensor Networks (TOSN)},bibtex_show={true},volume={15},number={1},pages={1--35},html={https://dl.acm.org/doi/abs/10.1145/3267105},year={2018},publisher={ACM New York, NY, USA}}
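The core classification step described above can be sketched as follows: a participant's reports are checked against those of co-located mobile trusted participants, and the participant is labeled reliable if the agreement rate clears a threshold. This is a minimal illustration under an assumed data model, not the FIRST algorithm itself.

```python
# Illustrative reliability classification against MTP ground truth
# (hypothetical data model; the real FIRST framework is more involved).
def classify_participants(reports, mtp_truth, threshold=0.8):
    """reports: {user: [(event_id, value), ...]}; mtp_truth: {event_id: value}."""
    labels = {}
    for user, user_reports in reports.items():
        checked = [(e, v) for e, v in user_reports if e in mtp_truth]
        if not checked:
            labels[user] = "unknown"   # never co-located with an MTP
            continue
        agree = sum(1 for e, v in checked if v == mtp_truth[e])
        labels[user] = "reliable" if agree / len(checked) >= threshold else "unreliable"
    return labels

reports = {"alice": [(1, "occupied"), (2, "empty")], "bob": [(1, "empty")]}
print(classify_participants(reports, mtp_truth={1: "occupied", 2: "empty"}))
# {'alice': 'reliable', 'bob': 'unreliable'}
```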
Journal
Low-Complexity Distributed Radio Access Network Slicing: Algorithms and Experimental Results
D’Oro, Salvatore, Restuccia, Francesco, Melodia, Tommaso, and Palazzo, Sergio
Radio access network (RAN) slicing is an effective methodology to dynamically allocate networking resources in 5G networks. One of the main challenges of RAN slicing is that it is provably NP-hard. For this reason, we design near-optimal low-complexity distributed RAN slicing algorithms. First, we model the slicing problem as a congestion game, and demonstrate that such game admits a unique Nash equilibrium (NE). Then, we evaluate the Price of Anarchy (PoA) of the NE, i.e., the efficiency of the NE as compared with the social optimum, and demonstrate that the PoA is upper-bounded by 3/2. Next, we propose two fully-distributed algorithms that provably converge to the unique NE without revealing privacy-sensitive parameters from the slice tenants. Moreover, we introduce an adaptive pricing mechanism of the wireless resources to improve the network owner’s profit. We evaluate the performance of our algorithms through simulations and an experimental testbed deployed on the Amazon EC2 cloud, both based on a real-world dataset of base stations from the OpenCellID project. Results conclude that our algorithms converge to the NE rapidly and achieve near-optimal performance, while our pricing mechanism effectively improves the profit of the network owner.
@article{d2018low,abbr={Journal},title={Low-Complexity Distributed Radio Access Network Slicing: Algorithms and Experimental Results},author={D’Oro, Salvatore and Restuccia, Francesco and Melodia, Tommaso and Palazzo, Sergio},journal={IEEE/ACM Transactions on Networking},bibtex_show={true},volume={26},number={6},pages={2815--2828},year={2018},html={https://ieeexplore.ieee.org/abstract/document/8532127},publisher={IEEE}}
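To make the congestion-game formulation concrete, the sketch below runs best-response dynamics with unit demands: each tenant repeatedly moves to the least-congested resource, and the process stops when no tenant can improve, i.e., at a Nash equilibrium. It is a centralized toy, so it deliberately omits the paper's distributed, privacy-preserving algorithms and the pricing mechanism.

```python
# Best-response dynamics in a singleton congestion game (toy version).
# Cost of a resource = number of tenants using it.
def best_response_dynamics(n_tenants, n_resources, max_rounds=100):
    choice = [0] * n_tenants                  # start: everyone on resource 0
    for _ in range(max_rounds):
        moved = False
        for t in range(n_tenants):
            load = [0] * n_resources          # congestion caused by the others
            for u, r in enumerate(choice):
                if u != t:
                    load[r] += 1
            best = min(range(n_resources), key=lambda r: load[r])
            if load[best] < load[choice[t]]:  # strictly profitable deviation
                choice[t] = best
                moved = True
        if not moved:                         # no deviation left: a NE
            return choice
    return choice

print(best_response_dynamics(n_tenants=7, n_resources=3))  # near-even split
```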
Journal
Securing the Internet of Things in the Age of Machine Learning and Software-Defined Networking
Restuccia, Francesco, D’Oro, Salvatore, and Melodia, Tommaso
The Internet of Things (IoT) realizes a vision where billions of interconnected devices are deployed just about everywhere, from inside our bodies to the most remote areas of the globe. As the IoT will soon pervade every aspect of our lives and will be accessible from anywhere, addressing critical IoT security threats is now more important than ever. Traditional approaches where security is applied as an afterthought and as a “patch” against known attacks are insufficient. Indeed, next-generation IoT challenges will require a new secure-by-design vision, where threats are addressed proactively and IoT devices learn to dynamically adapt to different threats. To this end, machine learning (ML) and software-defined networking (SDN) will be key to provide both reconfigurability and intelligence to the IoT devices. In this paper, we first provide a taxonomy and survey the state of the art in IoT security research, and offer a roadmap of concrete research challenges related to the application of ML and SDN to address existing and next-generation IoT security threats.
@article{restuccia2018securing,abbr={Journal},title={Securing the Internet of Things in the Age of Machine Learning and Software-Defined Networking},author={Restuccia, Francesco and D’Oro, Salvatore and Melodia, Tommaso},journal={IEEE Internet of Things Journal},bibtex_show={true},volume={5},number={6},pages={4829--4842},year={2018},html={https://ieeexplore.ieee.org/abstract/document/8377989},publisher={IEEE}}
Journal
Taming Cross-Layer Attacks in Wireless Networks: A Bayesian Learning Approach
Zhang, Liyang, Restuccia, Francesco, Melodia, Tommaso, and Pudlewski, Scott M
Wireless networks are extremely vulnerable to a plethora of security threats, including eavesdropping, jamming, and spoofing, to name a few. Recently, a number of next-generation cross-layer attacks have been unveiled, which leverage small changes on one network layer to stealthily and significantly compromise another target layer. Since cross-layer attacks are stealthy, dynamic, and unpredictable in nature, novel security techniques are needed. Since models of the environment and attacker’s behavior may be hard to obtain in practical scenarios, machine learning techniques become the ideal choice to tackle cross-layer attacks. In this paper, we propose FORMAT, a novel framework to tackle cross-layer security attacks in wireless networks. FORMAT is based on Bayesian learning and made up of a detection and a mitigation component. On one hand, the attack detection component constructs a model of observed evidence to identify stealthy attack activities. On the other hand, the mitigation component uses optimization theory to achieve the desired trade-off between security and performance. The proposed FORMAT framework has been extensively evaluated and compared with existing work by simulations and experiments obtained with a real-world testbed made up of Ettus Universal Software Radio Peripheral (USRP) radios. Results demonstrate the effectiveness of the proposed methodology, as FORMAT is able to effectively detect and mitigate the considered cross-layer attacks.
@article{zhang2018taming,abbr={Journal},title={Taming Cross-Layer Attacks in Wireless Networks: A Bayesian Learning Approach},author={Zhang, Liyang and Restuccia, Francesco and Melodia, Tommaso and Pudlewski, Scott M},journal={IEEE Transactions on Mobile Computing},bibtex_show={true},volume={18},number={7},pages={1688--1702},year={2018},html={https://ieeexplore.ieee.org/abstract/document/8428428},publisher={IEEE}}
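The detection component's Bayesian flavor can be shown with a one-line posterior update: given a stream of binary anomaly observations, Bayes' rule updates the probability that an attack is ongoing. The prior and likelihoods below are made-up placeholders, not values or models from FORMAT.

```python
# Sequential Bayesian update of P(attack | evidence). Likelihood values
# are illustrative assumptions only.
def posterior_attack_prob(evidence, prior=0.1,
                          p_obs_given_attack=0.9, p_obs_given_normal=0.2):
    p = prior
    for observed in evidence:                 # True = anomalous observation
        like_a = p_obs_given_attack if observed else 1 - p_obs_given_attack
        like_n = p_obs_given_normal if observed else 1 - p_obs_given_normal
        p = like_a * p / (like_a * p + like_n * (1 - p))   # Bayes' rule
    return p

print(posterior_attack_prob([True, True, False, True]))   # ≈ 0.56
```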
Conference
Practical Location Validation in Participatory Sensing Through Mobile Wifi Hotspots
Saracino, Andrea, Restuccia, Francesco, and Martinelli, Fabio
In 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE) 2018
The reliability of information in participatory sensing (PS) systems largely depends on the accuracy of the location of the participating users. However, existing PS applications are not able to efficiently validate the position of users in large-scale outdoor environments. In this paper, we present an efficient and scalable Location Validation System (LVS) to secure PS systems from location-spoofing attacks. In particular, the user location is verified with the help of mobile WiFi hot spots (MHSs), which are users activating the WiFi hotspot capability of their smartphones and accepting connections from nearby users, thereby validating their position inside the sensing area. The system also comprises a novel verification technique called Chains of Sight, which tackles collusion-based attacks effectively. LVS also includes a reputation-based algorithm that rules out sensing reports of location-spoofing users. The feasibility and efficiency of the WiFi-based approach of LVS is demonstrated by a set of indoor and outdoor experiments conducted using off-the-shelf smartphones, while the energy-efficiency of LVS is demonstrated by experiments using the Power Monitor energy tool. Finally, the security properties of LVS are analyzed by simulation experiments. Results indicate that the proposed LVS system is energy-efficient, applicable to most of the practical PS scenarios, and efficiently secures existing PS systems from location-spoofing attacks.
@inproceedings{saracino2018practical,abbr={Conference},title={Practical Location Validation in Participatory Sensing Through Mobile Wifi Hotspots},author={Saracino, Andrea and Restuccia, Francesco and Martinelli, Fabio},booktitle={2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE)},pages={596--607},bibtex_show={true},html={https://ieeexplore.ieee.org/abstract/document/8455958},year={2018},organization={IEEE}}
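The reputation component described above can be reduced to a toy update rule: verification failures at an MHS cut a user's score, and reports from low-score users are discarded. The scores, gains, losses, and acceptance threshold below are assumptions for illustration, not the paper's parameters.

```python
# Illustrative reputation bookkeeping in the spirit of LVS (hypothetical
# parameter values; new users start at a neutral 0.5).
def update_reputation(rep, user, verified, gain=0.1, loss=0.3):
    score = rep.get(user, 0.5)
    score = min(1.0, score + gain) if verified else max(0.0, score - loss)
    rep[user] = score
    return score

def accept_report(rep, user, min_rep=0.4):
    return rep.get(user, 0.5) >= min_rep      # rule out location spoofers

rep = {}
update_reputation(rep, "u7", verified=False)  # failed MHS check: 0.5 -> 0.2
print(accept_report(rep, "u7"))               # False: report ruled out
```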
Preprint
Blockchain for the Internet of Things: Present and Future
Restuccia, Francesco, Kanhere, Salvatore D, Melodia, Tommaso, and Das, Sajal K
One of the key challenges to the IoT’s success is how to secure and anonymize billions of IoT transactions and devices per day, an issue that still lingers despite significant research efforts over the last few years. On the other hand, technologies based on blockchain algorithms are disrupting today’s cryptocurrency markets and showing tremendous potential, since they provide a distributed transaction ledger that cannot be tampered with or controlled by a single entity. Although the blockchain may present itself as a cure-all for the IoT’s security and privacy challenges, significant research efforts still need to be put forth to adapt the computation-intensive blockchain algorithms to the stringent energy and processing constraints of today’s IoT devices. In this paper, we provide an overview of existing literature on the topic of blockchain for IoT, and present a roadmap of research challenges that will need to be addressed to enable the usage of blockchain technologies in the IoT.
@article{restuccia2019blockchain,abbr={Preprint},html={https://arxiv.org/abs/1903.07448},bibtex_show={true},title={Blockchain for the Internet of Things: Present and Future},author={Restuccia, Francesco and Kanhere, Salvatore D and Melodia, Tommaso and Das, Sajal K},journal={arXiv preprint arXiv:1903.07448},year={2018}}
2017
Journal
Quality of Information in Mobile Crowdsensing: Survey and Research Challenges
Restuccia, Francesco, Ghosh, Nirnay, Bhattacharjee, Shameek, Das, Sajal K, and Melodia, Tommaso
Smartphones have become the most pervasive devices in people’s lives and are clearly transforming the way we live and perceive technology. Today’s smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as an accelerometer, a gyroscope, a microphone, and a camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution, crime control, and wildlife monitoring, just to name a few. Differently from prior sensing paradigms, humans are now the primary actors of the sensing process, since they become fundamental in retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this article, we provide a new framework for defining and enforcing the QoI in mobile crowdsensing and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions of future work.
@article{restuccia2017quality,abbr={Journal},title={Quality of Information in Mobile Crowdsensing: Survey and Research Challenges},author={Restuccia, Francesco and Ghosh, Nirnay and Bhattacharjee, Shameek and Das, Sajal K and Melodia, Tommaso},journal={ACM Transactions on Sensor Networks (TOSN)},bibtex_show={true},volume={13},number={4},pages={1--43},html={https://dl.acm.org/doi/abs/10.1145/3139256},year={2017},publisher={ACM New York, NY, USA}}
Conference
ISonar: Software-Defined Underwater Acoustic Networking for Amphibious Smartphones
Restuccia, Francesco, Demirors, Emrecan, and Melodia, Tommaso
In Proceedings of the International Conference on Underwater Networks & Systems 2017
Recent technological advances have brought to the end-user market smartphones that are able to remain fully functional even when submerged under water. This capability will soon enable the commercialization of a plethora of exciting smartphone apps, including life-saving systems such as real-time monitoring of scuba divers’ breathing. On the other hand, it becomes paramount to empower smartphones with end-to-end underwater communication capabilities. In this paper, we propose iSonar, the first system implementing reliable software-defined acoustic networking between water-proof smartphones. Specifically, iSonar transforms off-the-shelf smartphones into ultrasonic software "radios" that implement an orthogonal frequency division multiplexing-based communication system to exchange data under water. To this end, iSonar sends and receives information through the AUX interface, implementing a lightweight network stack entirely in software. We have implemented a fully-functional hardware/software prototype of iSonar on Android smartphones and off-the-shelf electronic equipment, and extensively evaluated its performance through several experiments in a tank testbed. Results show that iSonar is able to achieve a packet error rate (PER) of 10⁻³, which is significant considering the low audio sampling rate and the strong multipath effect induced by the water tank environment.
@inproceedings{restuccia2017isonar,abbr={Conference},title={ISonar: Software-Defined Underwater Acoustic Networking for Amphibious Smartphones},author={Restuccia, Francesco and Demirors, Emrecan and Melodia, Tommaso},bibtex_show={true},booktitle={Proceedings of the International Conference on Underwater Networks \& Systems},pages={1--9},html={https://dl.acm.org/doi/abs/10.1145/3148675.3148710},year={2017}}
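The OFDM idea in the abstract can be sketched in a few lines: map bits to BPSK subcarriers, enforce Hermitian symmetry so the IFFT yields a real (speaker-playable) waveform, and prepend a cyclic prefix. The subcarrier count, prefix length, and modulation below are illustrative assumptions, not iSonar's actual physical-layer parameters.

```python
import numpy as np

# Bare-bones real-valued OFDM modulator sketch (hypothetical parameters).
def ofdm_symbol(bits, n_data=31, cp_len=16):
    assert len(bits) == n_data
    data = 2 * np.array(bits, dtype=float) - 1        # BPSK: 0 -> -1, 1 -> +1
    spectrum = np.zeros(2 * (n_data + 1), dtype=complex)  # 64 FFT bins
    spectrum[1:n_data + 1] = data                     # positive-frequency bins
    spectrum[-n_data:] = np.conj(data[::-1])          # Hermitian symmetry
    time_signal = np.fft.ifft(spectrum).real          # real by construction
    return np.concatenate([time_signal[-cp_len:], time_signal])  # add CP

tx = ofdm_symbol(np.random.randint(0, 2, 31))
print(tx.shape)   # (80,): 16-sample cyclic prefix + 64-sample symbol
```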
2016
Journal
Accurate and Efficient Modeling of 802.15.4 Unslotted CSMA/CA Through Event Chains Computation
De Guglielmo, Domenico, Restuccia, Francesco, Anastasi, Giuseppe, Conti, Marco, and Das, Sajal K
Many analytical models have been proposed for evaluating the performance of event-driven 802.15.4 Wireless Sensor Networks (WSNs), in Non-Beacon Enabled (NBE) mode. However, existing models do not provide accurate analysis of large-scale WSNs, due to tractability issues and/or simplifying assumptions. In this paper, we propose a new approach called Event Chains Computation (ECC) to model the unslotted CSMA/CA algorithm used for channel access in NBE mode. ECC relies on the idea that outcomes of the CSMA/CA algorithm can be represented as chains of events that subsequently occur in the network. Although ECC can generate all the possible outcomes, it only considers chains with a probability to occur greater than a pre-defined threshold to reduce complexity. Furthermore, ECC parallelizes the computation by managing different chains through different threads. Our results show that, with an appropriate threshold selection, the time to derive performance metrics can be drastically reduced, with negligible impact on accuracy. We also show that the computation time decreases almost linearly with the number of employed threads. We validate our model through simulations and testbed experiments, and use it to investigate the impact of different parameters on the WSN performance, in terms of delivery ratio, latency, and energy consumption.
@article{de2016accurate,abbr={Journal},title={Accurate and Efficient Modeling of 802.15.4 Unslotted CSMA/CA Through Event Chains Computation},author={De Guglielmo, Domenico and Restuccia, Francesco and Anastasi, Giuseppe and Conti, Marco and Das, Sajal K},journal={IEEE Transactions on Mobile Computing},bibtex_show={true},volume={15},number={12},pages={2954--2968},year={2016},html={https://ieeexplore.ieee.org/abstract/document/7404280},publisher={IEEE}}
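The chain-pruning idea is easy to sketch: enumerate event chains depth-first and extend a chain only while its joint probability stays above the threshold, so unlikely outcomes are dropped early. The event set and probabilities below are toy stand-ins, not the CSMA/CA outcome model of the paper.

```python
# Depth-first enumeration of event chains with probability-threshold
# pruning (illustrative; toy event probabilities).
def enumerate_chains(event_probs, threshold, max_len):
    chains = []
    def extend(chain, prob):
        if len(chain) == max_len:
            chains.append((chain, prob))
            return
        for event, p in event_probs.items():
            if prob * p >= threshold:         # prune unlikely chains early
                extend(chain + [event], prob * p)
    extend([], 1.0)
    return chains

kept = enumerate_chains({"tx_ok": 0.7, "collision": 0.2, "busy": 0.1},
                        threshold=1e-3, max_len=5)
print(len(kept), "chains kept out of", 3 ** 5)
```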
Journal
Incentive Mechanisms for Participatory Sensing: Survey and Research Challenges
Restuccia, Francesco, Das, Sajal K, and Payton, Jamie
Participatory sensing is a powerful paradigm that takes advantage of smartphones to collect and analyze data beyond the scale of what was previously possible. Given that participatory sensing systems rely completely on the users’ willingness to submit up-to-date and accurate information, it is paramount to effectively incentivize users’ active and reliable participation. In this article, we survey existing literature on incentive mechanisms for participatory sensing systems. In particular, we present a taxonomy of existing incentive mechanisms for participatory sensing systems, which are subsequently discussed in depth by comparing and contrasting different approaches. Finally, we discuss an agenda of open research challenges in incentivizing users in participatory sensing.
@article{restuccia2016incentive,abbr={Journal},title={Incentive Mechanisms for Participatory Sensing: Survey and Research Challenges},author={Restuccia, Francesco and Das, Sajal K and Payton, Jamie},journal={ACM Transactions on Sensor Networks (TOSN)},bibtex_show={true},volume={12},number={2},pages={1--40},year={2016},html={https://dl.acm.org/doi/abs/10.1145/2888398},publisher={ACM New York, NY, USA}}
Journal
Optimizing the Lifetime of Sensor Networks with Uncontrollable Mobile Sinks and QoS Constraints
Restuccia, Francesco, and Das, Sajal K
In past literature, it has been demonstrated that the use of mobile sinks (MSs) dramatically increases the lifetime of wireless sensor networks (WSNs). In applications where the MSs are humans, animals, or transportation systems, the mobility of the MSs is often uncontrollable and could also be random and unpredictable. This implies the necessity of algorithms tailored to handle uncertainty on the MS mobility. In this article, we define the lifetime optimization of a WSN in the presence of uncontrollable sink mobility and Quality of Service (QoS) constraints. After defining an ideal scheme (called Oracle) which provably maximizes network lifetime, we present a novel Swarm-Intelligence-based Sensor Selection Algorithm (SISSA), which optimizes network lifetime and meets predefined QoS constraints. We then mathematically analyze SISSA and derive analytical bounds on energy consumption, number of messages exchanged, and convergence time. The algorithm is evaluated on practical experimental setups, and its performance is compared to that of the optimal Oracle scheme, as well as to the IEEE 802.15.4 MAC and TDMA schemes. Results conclude that SISSA provides on average 56% of the lifetime provided by Oracle and outperforms IEEE 802.15.4 and TDMA in terms of yielded network lifetime.
@article{restuccia2016optimizing,abbr={Journal},title={Optimizing the Lifetime of Sensor Networks with Uncontrollable Mobile Sinks and QoS Constraints},author={Restuccia, Francesco and Das, Sajal K},journal={ACM Transactions on Sensor Networks (TOSN)},bibtex_show={true},volume={12},number={1},pages={1--31},year={2016},html={https://dl.acm.org/doi/abs/10.1145/2873059},publisher={ACM New York, NY, USA}}
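The selection idea can be illustrated with a simple energy-proportional sampler: each round, activate k sensors with probability proportional to residual energy, so depleted nodes rest. This is a toy stand-in for the swarm-intelligence machinery of SISSA; the node count, energy cost, and k are assumptions.

```python
import random

# Energy-proportional sensor activation (illustrative; NOT SISSA itself).
def select_sensors(energy, k):
    total = sum(energy.values())
    chosen = set()
    while len(chosen) < k:                    # roulette-wheel sampling
        r, acc = random.uniform(0, total), 0.0
        for node, e in energy.items():
            acc += e
            if acc >= r:
                chosen.add(node)
                break
    return chosen

energy = {f"s{i}": random.uniform(0.2, 1.0) for i in range(40)}
active = select_sensors(energy, k=5)
for node in active:                           # active nodes pay an energy cost
    energy[node] = max(0.0, energy[node] - 0.05)
print(sorted(active))
```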
Conference
RescuePal: A Smartphone-Based System to Discover People in Emergency Scenarios
Restuccia, Francesco, Thandu, Srinivas Chakravarthi, Chellappan, Sriram, and Das, Sajal K
In 2016 IEEE 17th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM) 2016
In emergency scenarios such as earthquakes, fires, avalanches, or building collapses, it is necessary to discover people trapped under debris or otherwise hidden from sight. In this paper, we propose RescuePal, an energy-efficient smartphone-based system that does not require any interaction from the victim and does not use energy-expensive GPS. RescuePal leverages a sound-based wake-up system that activates the WiFi interface of the victim’s smartphone only when the rescuer is close, to save energy. After presenting the system, we mathematically formulate an optimization problem to find the sound frequency and power level that minimize WiFi false activations while still guaranteeing high discovery efficiency. RescuePal has been implemented on off-the-shelf Android-based devices, and its performance has been evaluated in a realistic use-case scenario of victims inside a building. Finally, the energy consumption of RescuePal has been measured using the Power Monitor hardware tool. Results demonstrate that RescuePal is highly effective and saves more than 60% of energy with respect to an approach based only on WiFi.
@inproceedings{restuccia2016rescuepal,abbr={Conference},title={RescuePal: A Smartphone-Based System to Discover People in Emergency Scenarios},author={Restuccia, Francesco and Thandu, Srinivas Chakravarthi and Chellappan, Sriram and Das, Sajal K},booktitle={2016 IEEE 17th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM)},bibtex_show={true},pages={1--6},year={2016},html={https://ieeexplore.ieee.org/abstract/document/7523566},organization={IEEE}}
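The optimization step mentioned above amounts to a constrained search over the wake-up tone's frequency and power; a grid-search sketch under hypothetical models is shown below. Both model functions are placeholders standing in for measurement-derived curves, not anything from the paper.

```python
# Pick the (frequency, power) pair minimizing false activations subject
# to a minimum discovery probability (toy placeholder models).
def tune_wakeup(freqs_hz, powers_db, false_act, discovery_prob, min_disc=0.95):
    best, best_cost = None, float("inf")
    for f in freqs_hz:
        for p in powers_db:
            if discovery_prob(f, p) < min_disc:
                continue                      # constraint violated: skip
            cost = false_act(f, p)
            if cost < best_cost:
                best, best_cost = (f, p), cost
    return best

print(tune_wakeup(freqs_hz=range(17000, 20001, 500),
                  powers_db=range(60, 91, 5),
                  false_act=lambda f, p: p / f,                    # toy model
                  discovery_prob=lambda f, p: min(1.0, p / 80.0))) # toy model
```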
2015
Journal
Quality-of-Service Implications of Enhanced Program Algorithms for Charge-Trapping NAND in Future Solid-State Drives
Grossi, Alessandro, Zuolo, Lorenzo, Restuccia, Francesco, Zambelli, Cristian, and Olivo, Piero
Three-dimensional NAND memory devices based on charge trapping (CT) technology represent the most promising solution for hyperscaled solid-state drives (SSDs). However, the intrinsic low reliability offered by that storage medium leads to a high number of errors, requiring extensive use of complex error correction codes (ECCs) and advanced read algorithms such as read retry. This materializes in an overall reduction of the SSD’s QoS. To limit the number of errors, enhanced program algorithms able to improve the reliability figures of CT memory devices have been introduced. In this paper, the impact of such program algorithms combined with read retry and the ECC is experimentally characterized on CT-NAND arrays. The results are then exploited for co-simulations at the system level, assessing the reliability, performance, and QoS of future SSDs integrating CT-based memory devices.
@article{grossi2015quality,abbr={Journal},title={Quality-of-Service Implications of Enhanced Program Algorithms for Charge-Trapping NAND in Future Solid-State Drives},author={Grossi, Alessandro and Zuolo, Lorenzo and Restuccia, Francesco and Zambelli, Cristian and Olivo, Piero},journal={IEEE Transactions on Device and Materials Reliability},bibtex_show={true},volume={15},number={3},pages={363--369},html={https://ieeexplore.ieee.org/abstract/document/7130589},year={2015},publisher={IEEE}}
Conference
Lifetime Optimization with QoS of Sensor Networks with Uncontrollable Mobile Sinks
Restuccia, Francesco, and Das, Sajal K
In 2015 IEEE 16th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM) 2015
In past literature, it has been demonstrated that the use of mobile sinks (MSs) dramatically increases the lifetime of wireless sensor networks (WSNs). In applications where the MSs are humans, animals, or transportation systems, the mobility of the MS is often random and unpredictable, implying the necessity of novel algorithms specifically able to deal with large uncertainty on the MS mobility. In this paper, we define the yet unsolved problem of optimizing the lifetime of a WSN in the presence of uncontrollable and random sink mobility with QoS constraints. Then, we present a novel Swarm-Intelligence-based Sensor Selection Algorithm (SISSA), which optimizes network lifetime and meets pre-defined QoS constraints. Next, we mathematically analyze SISSA and derive analytical bounds on energy consumption, number of messages exchanged, and convergence time. The efficiency of SISSA and the accuracy of the model are experimentally evaluated with a testbed composed of 40 sensors, and the network lifetime provided by SISSA is compared to that of an ideal scheme. Experimental and analytical results conclude that SISSA is highly scalable and energy-efficient, and provides on average 56% of the lifetime provided by the ideal scheme in all the considered network parameter sets.
@inproceedings{restuccia2015lifetime,abbr={Conference},title={Lifetime Optimization with QoS of Sensor Networks with Uncontrollable Mobile Sinks},author={Restuccia, Francesco and Das, Sajal K},booktitle={2015 IEEE 16th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM)},bibtex_show={true},pages={1--9},html={https://ieeexplore.ieee.org/abstract/document/7158130},year={2015},organization={IEEE}}
Conference
Preserving QoI in Participatory Sensing by Tackling Location-Spoofing Through Mobile WiFi Hotspots
Restuccia, Francesco, Saracino, Andrea, Das, Sajal K, and Martinelli, Fabio
In 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops) 2015
The Quality of Information (QoI) in Participatory Sensing (PS) systems largely depends on the location accuracy of participating users. However, users could easily provide false information through Location Spoofing Attacks (LSAs). Existing PS systems are not able to efficiently validate the position of users in large-scale outdoor environments, and are thus prone to reduced QoI. In this paper, we present an efficient scheme to secure PS systems from LSAs. In particular, the user location is verified with the help of mobile WiFi hot spots (MHSs), which are users activating the WiFi interface on their smartphones and waiting for connections from nearby users, thereby validating their position inside the sensing area. A reputation-based algorithm is proposed to rule out sensing reports of location-spoofing users, thereby increasing the reliability of the PS system. The effectiveness of our scheme is analyzed through real-world experiments and a simulation study.
@inproceedings{restuccia2015preserving,abbr={Conference},title={Preserving QoI in Participatory Sensing by Tackling Location-Spoofing Through Mobile WiFi Hotspots},author={Restuccia, Francesco and Saracino, Andrea and Das, Sajal K and Martinelli, Fabio},bibtex_show={true},booktitle={2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops)},pages={81--86},year={2015},html={https://ieeexplore.ieee.org/abstract/document/7133998},organization={IEEE}}
2014
Journal
Analysis and Optimization of a Protocol for Mobile Element Discovery in Sensor Networks
Restuccia, Francesco, Anastasi, Giuseppe, Conti, Marco, and Das, Sajal K
Recent studies have demonstrated that mobile elements (MEs) are an efficient solution to dramatically decrease energy consumption in wireless sensor networks (WSNs). However, in most cases, sensors use duty-cycle schemes to save energy, and unless the ME mobility pattern is deterministic, each sensor node has to discover the presence of the ME in the nearby area before starting to exchange data with it. Therefore, in such wireless sensor networks with mobile elements (in short, WSN-MEs), the definition and analysis of a protocol for efficient ME discovery becomes of fundamental importance. In this paper, we propose an extensive performance analysis of an easy-to-implement, hierarchical discovery protocol for WSN-MEs, called the Dual Beacon Discovery (2BD) protocol, taking into account stochastic, multi-path, variable-speed ME mobility patterns. We also derive the optimal parameter values that minimize the energy consumption of sensor nodes, while guaranteeing the minimum node throughput required by the applications under consideration. Finally, we compare the 2BD protocol with a classical solution based on Periodic Listening (PL). Our results show that 2BD can exploit its hierarchical mechanism and thus significantly increase lifetime, especially when the ME discovery phase is relatively long.
@article{restuccia2013analysis,abbr={Journal},title={Analysis and Optimization of a Protocol for Mobile Element Discovery in Sensor Networks},author={Restuccia, Francesco and Anastasi, Giuseppe and Conti, Marco and Das, Sajal K},bibtex_show={true},journal={IEEE Transactions on Mobile Computing},volume={13},number={9},pages={1942--1954},year={2014},html={https://ieeexplore.ieee.org/abstract/document/6560040},publisher={IEEE}}
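The two-phase structure of the discovery protocol can be captured in a small energy model: a node duty-cycles cheaply while listening for long-range beacons, then listens continuously for the short-range beacon once the ME is known to be nearby. The timings, costs, and the assumption that long-range beacons repeat continuously while the ME is in range are all illustrative simplifications, not the paper's analytical model.

```python
# Toy energy/latency model of dual-beacon discovery. We assume the ME
# emits long-range beacons continuously once in long range, so the first
# listen slot after t_arrival_lr hears one.
def discovery_energy(t_arrival_lr, t_arrival_sr, wake_period, listen_slot,
                     idle_cost=1.0, rx_cost=5.0):
    """Return (energy, discovery_time) for one discovery episode."""
    energy, t = 0.0, 0.0
    while True:                       # phase 1: sparse listening for LRBs
        energy += listen_slot * rx_cost
        if t + listen_slot >= t_arrival_lr:
            break                     # long-range beacon heard in this slot
        energy += (wake_period - listen_slot) * idle_cost
        t += wake_period
    # phase 2: continuous listening until the short-range beacon arrives
    energy += max(0.0, t_arrival_sr - t) * rx_cost
    return energy, t_arrival_sr

print(discovery_energy(t_arrival_lr=12.3, t_arrival_sr=15.0,
                       wake_period=2.0, listen_slot=0.1))  # ~(22.3, 15.0)
```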
Conference
FIDES: A Trust-Based Framework for Secure User Incentivization in Participatory Sensing
Restuccia, Francesco, and Das, Sajal K
In Proceeding of IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks 2014 2014
Participatory sensing (PS) has recently attracted tremendous attention given its potential for a wide variety of sensing applications. Since PS systems rely completely on the data provided by the users, incentivizing users’ active participation while guaranteeing data reliability is paramount to effectively employing PS systems in practical scenarios. In this paper, we first define a set of attacks that compromise the data reliability of existing PS applications. Next, we propose a scalable and secure trust-based framework, called FIDES, which relies on the concept of mobile security agents (MSAs) and Josang’s trust model to rule out incorrect reports and reward reliable users. By simulating the FIDES framework on mobility traces of taxi cabs in San Francisco, we demonstrate that FIDES secures the PS system from the proposed attacks, guarantees high data reliability, and saves a significant amount of revenue with respect to existing reward mechanisms.
@inproceedings{restuccia2014fides,abbr={Conference},title={FIDES: A Trust-Based Framework for Secure User Incentivization in Participatory Sensing},author={Restuccia, Francesco and Das, Sajal K},booktitle={Proceeding of IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks 2014},bibtex_show={true},pages={1--10},year={2014},html={https://ieeexplore.ieee.org/abstract/document/6918972},organization={IEEE}}
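Josang's trust model mentioned above is, in its standard beta-reputation form, a one-liner: with r positive and s negative past interactions, the expected trust is (r + 1) / (r + s + 2). The sketch below shows only that update; how FIDES maps MSA observations onto r and s is simplified away here.

```python
# Standard beta-reputation expectation (Josang): starts at 0.5 with no
# history, rises with positive interactions, falls with negative ones.
def expected_trust(r, s):
    return (r + 1) / (r + s + 2)

r = s = 0
for report_ok in [True, True, False, True]:   # toy interaction history
    r, s = (r + 1, s) if report_ok else (r, s + 1)
print(expected_trust(r, s))                   # 4/6 ≈ 0.667
```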
2012
Conference
A Hybrid and Flexible Discovery Algorithm for Wireless Sensor Networks with Mobile Elements
Kondepu, Koteswararao, Restuccia, Francesco, Anastasi, Giuseppe, and Conti, Marco
In 2012 IEEE Symposium on Computers and Communications (ISCC) 2012
In sparse wireless sensor networks, data collection is carried out through specialized mobile nodes that visit sensor nodes, gather data, and transport them to the sink node. Since visit times are typically unpredictable, one of the main challenges to be faced in this kind of network is the energy-efficient discovery of mobile collector nodes by sensor nodes. In this paper, we propose an adaptive discovery algorithm that combines a learning-based approach with a hierarchical scheme. Thanks to its hybrid nature, the proposed algorithm is very flexible, as it can adapt to very different mobility patterns of the mobile collector node(s), ranging from deterministic to completely random mobility. We have investigated the performance of the proposed approach through simulation, and we have compared it with existing adaptive algorithms that only leverage either a learning-based or a hierarchical approach. Our results show that the proposed hybrid algorithm outperforms the considered adaptive approaches in all the analyzed scenarios.
@inproceedings{kondepu2012hybrid,abbr={Conference},title={A Hybrid and Flexible Discovery Algorithm for Wireless Sensor Networks with Mobile Elements},author={Kondepu, Koteswararao and Restuccia, Francesco and Anastasi, Giuseppe and Conti, Marco},bibtex_show={true},booktitle={2012 IEEE Symposium on Computers and Communications (ISCC)},pages={000295--000300},year={2012},html={https://ieeexplore.ieee.org/abstract/document/6249311},organization={IEEE}}
Conference
Performance Analysis of a Hierarchical Discovery Protocol for WSNs with Mobile Elements
Restuccia, Francesco, Anastasi, Giuseppe, Conti, Marco, and Das, Sajal K
In 2012 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM) 2012
Wireless Sensor Networks (WSNs) are emerging as an effective solution for a wide range of real-life applications. In scenarios where fine-grain sensing is not required, sensor nodes can be sparsely deployed in strategic locations and special Mobile Elements (MEs) can be used for data collection. Since communication between a sensor node and a ME can occur only when they are in the transmission range of each other, one of the main challenges in the design of a WSN with MEs is the energy-efficient and timely discovery of MEs. In this paper, we consider a hierarchical ME discovery protocol, namely the Dual Beacon Discovery (2BD) protocol, based on two different beacon messages emitted by the ME (i.e., Long-Range Beacons and Short-Range Beacons). We develop a detailed analytical model of 2BD assuming a sparse network scenario, and derive the optimal parameter values that minimize the energy consumption at sensor nodes, while guaranteeing the minimum throughput required by the application. Finally, we compare the energy efficiency and performance of 2BD with those of a traditional discovery protocol based on a single beacon. Our results show that 2BD can provide significant energy savings, especially when the discovery phase is relatively long.
@inproceedings{restuccia2012performance,abbr={Conference},title={Performance Analysis of a Hierarchical Discovery Protocol for WSNs with Mobile Elements},author={Restuccia, Francesco and Anastasi, Giuseppe and Conti, Marco and Das, Sajal K},booktitle={2012 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)},bibtex_show={true},pages={1--9},year={2012},html={https://ieeexplore.ieee.org/abstract/document/6263708},organization={IEEE}}
2011
Journal
Energy Efficiency in Wireless Sensor Networks with Mobile Elements
Restuccia, F, Kondepu, K, Anastasi, Giuseppe, and Conti, G
@article{restuccia2011energy,abbr={Journal},title={Energy Efficiency in Wireless Sensor Networks with Mobile Elements},author={Restuccia, F and Kondepu, K and Anastasi, Giuseppe and Conti, G},bibtex_show={true},year={2011},html={https://arpi.unipi.it/handle/11568/145880}}