The Intelligent Mobile Systems Laboratory at the University of Pittsburgh regularly releases open-source software and makes a practice of publishing the source code accompanying our papers. The platforms running our software range from software-defined radios and embedded systems to mobile operating systems such as Android.

 

ElasticTrainer: Speeding up on-device neural network training on weak embedded devices

On-device training is essential for neural networks (NNs) to continuously adapt to new online data, but can be time-consuming on resource-constrained embedded devices. To speed up on-device training, existing schemes either select the trainable NN portion offline or make irreversible selections at runtime, so the evolution of the trainable portion is constrained and cannot adapt to the current training needs. Instead, runtime adaptation of on-device training should be fully elastic, i.e., every NN substructure can be freely removed from or added to the trainable NN portion at any time during training. This artifact realizes such full elasticity in on-device training and implements our work published in MobiSys 2023.
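As a rough illustration of this elasticity, the sketch below toggles which substructures are trainable between training steps. It uses PyTorch-style APIs as an assumption (the artifact may use a different framework), and score_fn and select_fn are hypothetical placeholders for an importance measure and a selection policy.

# Illustrative sketch, not the artifact's actual code.
import torch
import torch.nn as nn

def apply_trainable_selection(model: nn.Module, selected: set) -> None:
    """Freeze or unfreeze each leaf substructure so membership in the
    trainable NN portion can change freely between training steps."""
    for name, module in model.named_modules():
        if len(list(module.children())) > 0:      # skip container modules
            continue
        trainable = name in selected
        for p in module.parameters(recurse=False):
            p.requires_grad_(trainable)

def training_loop(model, loader, score_fn, select_fn, optimizer, loss_fn,
                  steps_per_update=50):
    for step, (x, y) in enumerate(loader):
        if step % steps_per_update == 0:
            scores = score_fn(model)              # hypothetical per-layer importance scores
            selected = select_fn(scores)          # substructures worth training right now
            apply_trainable_selection(model, selected)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()           # gradients flow only through the selected portion
        optimizer.step()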

For more details, please check https://github.com/HelloKevin07/ElasticTrainer.

 

AgileNN: Agile neural network offloading with Explainable AI for real-time inference on extremely weak embedded devices

With the wide adoption of AI applications, there is a pressing need to enable real-time neural network (NN) inference on small embedded devices, but achieving high NN inference performance on these devices is challenging due to their extremely weak capabilities. One solution is to offload the NN computations to a cloud server, but existing NN partitioning schemes need an expensive local neural network to enforce feature sparsity and thereby minimize the amount of data transmitted to the server. To remove this limitation, our work shifts the rationale of NN partitioning from being fixed to being agile and data-centric. Our basic idea is to incorporate knowledge about the heterogeneity of different input data into training, so that the computations required to enforce feature sparsity are migrated from online inference to offline training. More specifically, we interpret such heterogeneity as the different importance of data features to NN inference, and leverage eXplainable AI (XAI) techniques to explicitly evaluate this importance during training. In this way, online inference can enforce feature sparsity by compressing and transmitting only the less important features, without involving expensive NN computations. This artifact implements our work published in MobiCom 2022.
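A minimal sketch of this inference-time split is shown below. The function names, the fixed local/remote ratio, and the uniform low-bit quantizer are illustrative assumptions rather than AgileNN's actual implementation; the importance ranking is assumed to come from XAI analysis performed offline during training.

# Illustrative sketch, not AgileNN's actual API.
import numpy as np

def split_and_compress(features, importance, local_ratio=0.2, bits=2):
    """Keep the most important features on-device; coarsely quantize the rest for the server."""
    k = max(1, int(local_ratio * features.size))
    order = np.argsort(importance)[::-1]              # importance ranking learned offline via XAI
    local_idx, remote_idx = order[:k], order[k:]

    remote = features[remote_idx]
    lo, hi = float(remote.min()), float(remote.max())
    levels = (1 << bits) - 1
    codes = np.round((remote - lo) / (hi - lo + 1e-8) * levels).astype(np.uint8)

    local_part = features[local_idx]                  # stays on the weak device
    payload = (codes, lo, hi, remote_idx)             # small payload transmitted to the server
    return local_part, payload

def dequantize(payload, bits=2):
    """Server-side reconstruction of the coarsely quantized, less important features."""
    codes, lo, hi, remote_idx = payload
    levels = (1 << bits) - 1
    return codes.astype(np.float32) / levels * (hi - lo) + lo, remote_idx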

For more details, please check https://github.com/HelloKevin07/AgileNN.

 

A side channel attack on Qualcomm Snapdragon mobile GPUs via GPU performance counters

Malicious attacks against smartphones have recently become a major technical concern. Mobile hardware attacks exploit unintended information leakage from system hardware, and are hence difficult to eliminate because hardware upgrades cannot be easily done on commodity devices. While most existing hardware eavesdropping attacks on smartphones focus on the CPU and on-board sensors, our work presents a new eavesdropping attack targeting mobile GPUs that allows an unprivileged attacker to precisely infer the user's credential inputs through the on-screen keyboard. Our basic rationale is the explicit correlation between user inputs and screen display on smartphones: on one hand, user inputs are always reflected in the screen display as visible feedback; on the other hand, display contents are always rendered by the GPU, and the GPU is solely used for graphics rendering in most cases. Based on this rationale, we found that GPU performance counters (PCs) in certain categories reflect the amount of screen display change at the granularity of individual pixels. This explicit and fine-grained correlation allows direct eavesdropping without any ambiguity. This artifact implements our work published in ASPLOS 2022.
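The signal-processing side of this rationale can be sketched as follows. How the counter values are sampled is device-specific and omitted here, so the trace argument below is a hypothetical array of per-interval counter readings (e.g., pixels written per frame); the sketch only illustrates how spikes in display-change activity reveal keystroke timing, not the full key inference in the artifact.

# Illustrative sketch under the assumptions stated above.
import numpy as np

def keystroke_events(trace, sample_period_ms, min_gap_ms=80.0, z_thresh=3.0):
    """Return estimated keystroke timestamps (ms) as spikes in display-change activity."""
    trace = np.asarray(trace, dtype=np.float64)
    baseline = np.median(trace)
    mad = np.median(np.abs(trace - baseline)) + 1e-9
    z = (trace - baseline) / (1.4826 * mad)           # robust z-score of each counter sample
    candidates = np.flatnonzero(z > z_thresh)

    events, last = [], -np.inf
    for i in candidates:
        t = i * sample_period_ms
        if t - last >= min_gap_ms:                    # merge samples from the same key press
            events.append(t)
            last = t
    return events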

For more details, please check https://doi.org/10.5281/zenodo.5733423.

 

TransFi: a software solution to fine-grained custom wireless PHY signal emulation with commodity WiFi

New wireless physical-layer (PHY) designs are key to improving wireless network performance. Adopting these new designs, however, requires modifications to wireless hardware and is difficult on commodity devices. This difficulty results in slow adoption of new wireless PHY techniques, and is also the major reason for the gap between lab prototypes and the wireless systems actually in use. To avoid such hardware modification, one approach is to selectively transmit the commodity wireless signal that best approximates the target signal in the new PHY design. However, since commodity wireless hardware can only produce a finite number of fixed PHY signal waveforms, this method is too coarse-grained to precisely approximate a target signal that may appear anywhere in the custom wireless PHY, and results in large and uncontrollable approximation error when transmitting high-speed data frames. Instead, we envision that this constraint of commodity wireless hardware can be removed by fine-grained emulation, which mixes multiple commodity wireless signals with adaptively selected amplitudes and phases over the air. Based on this insight, we developed TransFi, a software technique that enables custom wireless PHY functionalities on commodity WiFi transmitters. The TransFi software computes the commodity WiFi's MAC payload based on the target signal being emulated, and passes the computed MAC payloads to the WiFi PHY layer to produce the target signal. More specifically, it treats the target signal in each wireless symbol as a custom point on the complex plane, and selects a set of commodity QAM constellation points whose geometric mixture matches that custom point. Each selected QAM constellation point is then reverse-computed into the MAC payload of one MIMO stream by mimicking the WiFi data decoding process. This artifact implements our work published in MobiSys 2022.
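The constellation-selection step can be illustrated with the brute-force sketch below. The equal-weight mixture, the unit-power normalization, and the exhaustive search are simplifying assumptions for illustration, not TransFi's actual algorithm; mapping the chosen points back to MAC payload bits is likewise omitted.

# Illustrative sketch under the assumptions stated above.
import itertools
import numpy as np

def qam_constellation(order=16):
    """Square QAM constellation normalized to unit average power."""
    m = int(np.sqrt(order))
    levels = np.arange(-(m - 1), m, 2, dtype=float)
    pts = np.array([complex(i, q) for i in levels for q in levels])
    return pts / np.sqrt((np.abs(pts) ** 2).mean())

def best_mixture(target, n_streams=2, order=16):
    """Pick one constellation point per MIMO stream whose mixture best matches the target point."""
    pts = qam_constellation(order)
    best, best_err = None, np.inf
    for combo in itertools.product(range(len(pts)), repeat=n_streams):
        mix = np.mean(pts[list(combo)])               # geometric mixture of the selected streams
        err = abs(mix - complex(target))
        if err < best_err:
            best, best_err = combo, err
    return best, best_err                             # indices would map back to MAC payload bits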

For more details, please check https://doi.org/10.5281/zenodo.6616718.

 

A generic resource sharing framework between remote mobile systems

Remote resource access across mobile systems augments local mobile devices' capabilities, but is challenging due to the heterogeneity of mobile hardware. Instead of tackling the low-level drivers, I/O stacks and data access interfaces of individual hardware, we developed a software framework that exploits existing OS services as the interface for remote resource access. This framework is implemented as a middleware in Android OS, and supports generic sharing of various hardware (GPS, accelerometer, audio speaker, camera) between remote mobile devices. It can be migrated to diverse types of devices, including smartphones, tablets and smartwatches, with minimal modifications. This software implements our work published in INFOCOM 2017.

For more details, please check https://github.com/UtkMSNL/sharing_android.