Student Research Presentation Videos
Accurate Atmospheric Density Prediction for Efficient Planning of Mars Aerobraking Maneuvers
Amrutha Dasyam and Pardhasai Chadalavada
Accurate atmospheric density prediction is crucial for the successful aerobraking maneuvers that precede science operations in planetary missions around Mars. An aerobraking maneuver uses the Martian atmosphere over multiple passages to decelerate the spacecraft for eventual capture into a low-altitude science orbit. Long communication delays due to the large Earth-Mars distance necessitate accurate onboard computation of atmospheric density, which is affected by several factors: latitude, local time, solar activity, and dust storms. To this end, we propose a neural network-based methodology that uses the spacecraft states and the prevailing density data from the current atmospheric passage to estimate the atmospheric density during the next revolution. In the absence of flight data, we use trajectories propagated in NASA's Mars Global Reference Atmospheric Model software (provided by our University of Kansas collaborators), which captures the effects of Martian atmospheric conditions. Our numerical simulations indicate that, given information about a current atmospheric passage (a randomly selected revolution), the atmospheric density for the forthcoming revolution can be predicted with an accuracy of around 97%.
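As a minimal illustration of the quantity being predicted, the sketch below evaluates a toy exponential model of Martian atmospheric density. All numbers (surface density, scale height) are illustrative placeholders, not values from the study; the abstract's neural network learns density variations from Mars-GRAM trajectories that a fixed baseline like this cannot capture.

```python
import math

def exponential_density(h_km, rho0=0.020, scale_height_km=11.1):
    """Toy exponential model of atmospheric density (kg/m^3) at
    altitude h_km. rho0 and the scale height are illustrative only;
    real densities vary with latitude, local time, solar activity,
    and dust storms, which is what the neural network must learn."""
    return rho0 * math.exp(-h_km / scale_height_km)

# Density falls by a factor of e over one scale height.
rho_surface = exponential_density(0.0)
rho_one_scale_height = exponential_density(11.1)
```

In this simplified picture, a predictor for the next revolution would output corrections to such a baseline profile from the current-pass states and measured densities.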
Predicting graft survival time in liver transplant patients using hybrid deep learning algorithms
Gokcen Akgun
Liver transplantation remains the only option for patients with end-stage liver disease. However, due to the scarcity of available organs, transplant decisions must be made with the utmost care. With the widespread use of Electronic Health Record data and machine learning algorithms, transplant decisions are now far better grounded than ever. In this study, we introduce a hybrid deep learning model to predict graft survival time in liver transplant patients by considering both static and dynamic patient variables. We demonstrate the proposed deep learning algorithm's performance against more conventional models, such as Cox regression, using UNOS data. Our findings suggest that a deep learning model that incorporates longitudinal information about liver transplant patients outperforms its counterparts.
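Survival models such as the ones compared here are commonly evaluated with Harrell's concordance index (C-index). The sketch below is a plain-Python version of that metric, assuming right-censored data with event indicators; it is illustrative and not tied to the study's actual evaluation code.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter
    observed time experienced the event (events[i] == 1); the pair is
    concordant when that subject also has the higher predicted risk.
    Ties in risk count as half-concordant.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly anti-ordered risks (shorter survival => higher risk): C = 1.
c = concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1])
```

A C-index of 0.5 corresponds to random ranking, 1.0 to perfect ranking of survival times by predicted risk.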
Deep Learning Models for Obesity Classification using Facial Images
Hera Siddiqui
Obesity is one of the most challenging healthcare problems that the world is facing today. The most common method of identifying obesity in adults is the Body Mass Index (BMI), defined as weight in kilograms divided by the square of height in meters (kg/m²). Overweight individuals have a BMI between 25 and 30, and those over 30 are classified as obese. Recent studies suggest that BMI can be inferred from facial images using deep learning based convolutional neural networks (CNNs) for obesity classification with about 85%-90% accuracy. We investigated five CNNs to this effect: three lightweight (LightCNN-29, MobileNet-V2, and ShuffleNet-V2) and two heavyweight (VGG-16, ResNet-50). These CNNs were first trained on the publicly available VGGFace2 dataset, fine-tuned on the FIW-BMI dataset, and tested on the VisualBMI dataset annotated with BMI information. Overall accuracy (percentage of correct obese and non-obese classifications) ranged from 82.53% (ShuffleNet-V2) to 91.16% (ResNet-50). No significant difference in accuracy was observed between males and females. The Area Under the Curve (AUC) ranged from 0.89 to 0.96, with the lowest for ShuffleNet-V2 and the highest for ResNet-50. On-device deployment offers better inference time and privacy, but with a drop in accuracy. Different schemes for model compression need to be evaluated to study the trade-off between model size and accuracy. Studies suggest that AI systems are biased against people of color, especially dark-skinned women. As part of future work, we would like to investigate whether this bias exists in obesity classification models as well.
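The BMI definition and cut-offs used for the labels can be written down directly. The sketch below implements exactly the thresholds stated above; it is a labeling convention, not the CNN itself.

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by the square of
    height in meters (kg/m^2)."""
    return weight_kg / height_m ** 2

def bmi_category(b):
    """Cut-offs from the abstract: 25-30 overweight, over 30 obese."""
    if b >= 30:
        return "obese"
    if b >= 25:
        return "overweight"
    return "normal or underweight"

# 95 kg at 1.70 m gives a BMI of about 32.9, i.e. obese.
label = bmi_category(bmi(95.0, 1.70))
```

The CNNs in the study predict these labels from facial images alone, without access to measured height or weight.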
Quantum Neural Network Training of A Quantum Repeater Node
Jackson Dahn
Quantum computation is a rapidly developing field; however, the problem of efficient, scalable, and robust quantum algorithm design is proving to be extremely difficult. Our research group, led by Dr. Elizabeth Behrman and Dr. James Steck, uses machine learning to design algorithms for specific quantum computing tasks. The qRepeater is one such application, aimed both at demonstrating the ability of a quantum neural net (QNN) to train an n-qubit swap gate and at showing that the resulting algorithm is more robust than traditional techniques. Swap gates are used as quantum repeater nodes, which extend the range over which photons in fiber optics can carry quantum information, and have uses in virtually all quantum communication applications. The qRepeater is trained using density matrices of the input and output states, which are fed into a Hamiltonian composed of tunneling, bias, and coupling parameters trained by a Levenberg-Marquardt algorithm. The input density matrix is fed into the Hamiltonian and integrated over a set time. The RMS of the fidelity between the output density matrices and each training output state is then used as the measure to adjust the parameters of the Hamiltonian. The project has been successful for the two- and three-qubit cases so far, and we are currently working on a four-qubit case. After success on the four-qubit case, we will turn on a noise model for the system and retrain the algorithm, comparing its robustness to traditional implementations.
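For orientation, the target unitary and the fidelity measure can be shown on the smallest case. The sketch below applies the two-qubit SWAP gate to a basis state and checks fidelity; for simplicity it uses real-amplitude pure states rather than the density matrices the actual training works with.

```python
# 4x4 SWAP gate in the computational basis |00>, |01>, |10>, |11>:
# it exchanges the amplitudes of |01> and |10>.
SWAP = [
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]

def apply(gate, state):
    """Matrix-vector product: act on a 4-dimensional state vector."""
    return [sum(gate[r][c] * state[c] for c in range(4)) for r in range(4)]

def fidelity(a, b):
    """|<a|b>|^2 for real-amplitude pure states (simplified stand-in
    for the density-matrix fidelity used in training)."""
    return sum(x * y for x, y in zip(a, b)) ** 2

# |01> should map to |10> with unit fidelity.
out = apply(SWAP, [0.0, 1.0, 0.0, 0.0])
f = fidelity(out, [0.0, 0.0, 1.0, 0.0])
```

The QNN's task is to find Hamiltonian parameters whose time evolution reproduces this unitary, with the RMS of such fidelities over the training set driving the Levenberg-Marquardt updates.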
House Price Prediction
Klarina Paulisch
The prediction of house prices provides valuable insights for homeowners and buyers, which can assist their decision-making. For this reason, we aimed to develop a general model to accurately predict house prices based on characteristics concerning the house's value rather than market-based factors. We used a data set containing the sales price as the dependent variable and 79 independent variables, among them 40 categorical variables. Since the dependent variable is continuous and our data set covers many independent variables, we conducted a stepwise linear regression in combination with five-fold cross-validation to validate the model. Due to statistical limits, we could not average the five models and instead chose the best-performing model based on the highest R² and lowest Standard Error of Estimate. The model predicts 92% of the variance of the dependent variable with a Standard Error of Estimate of 23,000 and consists of 54 independent variables. Despite the high prediction quality, our data contain a location and a time bias because we only used data from Ames from 2007-2010. However, the regression equation provides beneficial insights, such as variables increasing the house price, like overall or kitchen quality, and variables decreasing the house price, like an unfinished garage or deductions.
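The five-fold cross-validation scheme used to validate the model can be sketched in a few lines. This is a generic illustration of k-fold splitting, not the study's actual validation code.

```python
def k_fold_indices(n_samples, k=5):
    """Split indices 0..n_samples-1 into k contiguous folds; each fold
    serves once as the validation set while the remaining indices form
    the training set."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        folds.append((train, val))
        start += size
    return folds

# With 10 samples and k=5, each validation fold holds 2 samples.
splits = k_fold_indices(10, k=5)
```

Each of the five train/validation splits yields one fitted stepwise model; the abstract's final model is the best of the five by R² and Standard Error of Estimate.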
Convolutional Neural Net Application in NOvA
Momin Khan
The NOvA experiment focuses on the study of neutrinos and other subatomic particles through the use of the Fermilab particle detectors. Neutrinos in particular are elusive particles, since they rarely interact with matter. They also carry no charge, making them difficult to detect with traditional methods. Hence, NOvA uses a computer algorithm to reconstruct the energy and other relevant information about neutrinos: the algorithm creates prongs after examining images of the particle events and determines where the vertex of the interaction should be placed. This, however, is often inaccurate due to many different factors. A convolutional neural network was implemented in order to mitigate these errors and provide a more accurate assessment of the vertex location and cvnmap information.
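The core operation a CNN applies to such event images is a learned 2-D convolution. The toy sketch below runs one hand-written filter over a tiny "event image" to show how a filter response peaks near an energy deposit; the image, kernel, and values are invented for illustration and have no connection to the actual NOvA detector data or network.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation of a small image with a kernel,
    the basic building block of a convolutional layer."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# Invented 4x4 "event image" with an energy deposit near the center.
image = [
    [0, 0, 0, 0],
    [0, 9, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
# A 2x2 averaging kernel responds most strongly around the deposit.
response = conv2d_valid(image, [[0.25, 0.25], [0.25, 0.25]])
```

A real vertexing CNN stacks many such learned filters and regresses the vertex position from the resulting feature maps.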
Spatiotemporal Access to Healthy Food in Sedgwick County
Rupert Nunez
Healthy food access and the local food environment have become an important issue for city and state governments. Since gaining academic attention in the 1990s, studies have been conducted across the world documenting the food environment at local, state, and country-wide scales. The purpose of this study was to create a detailed analysis of healthy food availability in Sedgwick County in terms of physical distance and time. Using store data from Google Maps, the GPS coordinates of 71 stores and markets with produce available in Sedgwick County were recorded. We then gathered census tract shapefiles from the Census Bureau and street data from the county database. ArcGIS was then used to construct a road network and perform a service area analysis. Separate maps were used for accessibility at chosen time frames. The results were represented graphically using ArcGIS to create custom maps. The results of this study show that, for the time component, healthy food is widely available during the most popular shopping times; location is a different story. Rural areas experience much longer drive times to reach healthy food, and western Sedgwick County has the longest average distance calculated.
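The service area analysis amounts to finding all road-network nodes reachable from a store within a drive-time budget, i.e. a bounded shortest-path search. The sketch below shows this with Dijkstra's algorithm on a toy road graph; the network and travel times are invented, and the study itself performed this step in ArcGIS.

```python
import heapq

def service_area(graph, source, max_minutes):
    """Return the set of nodes reachable from `source` within
    `max_minutes` of drive time. `graph` maps a node to a list of
    (neighbor, minutes) edges. Bounded Dijkstra search."""
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, minutes in graph.get(node, []):
            nt = t + minutes
            if nt <= max_minutes and nt < best.get(nbr, float("inf")):
                best[nbr] = nt
                heapq.heappush(heap, (nt, nbr))
    return set(best)

# Toy road network: from store "A", nodes "B" and "C" are within
# 10 minutes, but "D" (12 minutes via C) is not.
roads = {"A": [("B", 4), ("C", 8)], "B": [("C", 3)], "C": [("D", 5)]}
reachable = service_area(roads, "A", 10)
```

Overlaying such reachable sets for all 71 stores against census tracts yields the accessibility maps described above.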
Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response
Saideshwar Kotha
Multimedia data such as text and image content in social media provide significant information during disaster events. However, existing research has mostly focused on analyzing the text modality for disaster response. In this work, we use both text and image modalities from social media, and their combination, with state-of-the-art deep learning techniques for disaster response detection. Extensive experiments were conducted on the CrisisMMD dataset using deep learning models, namely VGG16, VGG19, ResNet, XceptionNet, DenseNet, and InceptionNet. Our experimental results suggest that the multimodal architecture combining both text and image modalities yields better performance than models trained on a single modality (either text or image). Experimental results show an average accuracy of 68.83% for the Informative vs. Non-Informative categories, and 61.33% for the humanitarian category.
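One simple way to combine the two modalities is late fusion: average the per-class probabilities produced by the text branch and the image branch. The sketch below is a generic illustration of this idea, with invented probabilities; the abstract does not specify which fusion strategy its multimodal architecture uses.

```python
def late_fusion(text_probs, image_probs, w_text=0.5):
    """Weighted average of per-class probabilities from a text
    classifier and an image classifier (late fusion)."""
    return [w_text * t + (1 - w_text) * i
            for t, i in zip(text_probs, image_probs)]

# Invented example: text branch favors class 0 ("informative"),
# image branch is uncertain; the fused prediction is class 0.
fused = late_fusion([0.8, 0.2], [0.5, 0.5])
predicted = max(range(len(fused)), key=fused.__getitem__)
```

Early fusion (concatenating feature vectors before a joint classifier) is the main alternative; both let one modality compensate when the other is ambiguous.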
Automatic Assessment and Prediction of the Resilience of Utility Poles Using Unmanned Aerial Vehicles and Computer Vision Techniques
Santhosh Jothimani and Lalith Kumar Varma Koneti
Due to hazardous weather events, the utility poles of electric power distribution lines may fail, resulting in power outages and, consequently, adverse economic and social consequences. Utility poles should therefore be maintained regularly by the utility companies so that the distribution system operates continuously and safely. Our research concerns pole monitoring methods using deep learning and computer vision. Using a drone, images of the utility poles are captured and processed automatically to find the pole inclination angles. The research also involves calculating the bending moment exerted on the poles by wind and gravitational forces, as well as cable weight, to compare it with the potential moment of rupture. From the resulting angles, a support vector machine classifies the resilience of the poles into three categories: resilient (0° <= angle < 15°), moderately resilient (15° <= angle < 25°), and non-resilient (angle >= 25°). This automated system for assessing the resilience condition of utility poles helps increase the resilience of the utility companies' systems. This research has been done within the Disaster Resilience Analytics Center (DRAC), which aims to combine big data and machine learning to analyze the interactions between infrastructural, economic, and social elements of communities related to disaster prediction and risk reduction.
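The angle thresholds above define the labels directly, and the bending-moment comparison can be illustrated with the simplest possible load model. In the sketch below, the threshold function follows the abstract exactly, while the bending-moment function is a deliberately simplified point-load stand-in (force times lever arm) with invented numbers, not the study's actual structural model.

```python
def resilience_category(angle_deg):
    """Map a measured inclination angle to the three classes given in
    the abstract."""
    if angle_deg < 15:
        return "resilient"
    if angle_deg < 25:
        return "moderately resilient"
    return "non-resilient"

def point_load_moment(force_n, height_m):
    """Toy bending moment (N*m) of a single point load applied at a
    given height up the pole; the study compares such moments against
    the pole's moment of rupture."""
    return force_n * height_m

# Invented example: an 18-degree lean, and a 500 N wind load at 6 m.
cat = resilience_category(18.0)
moment = point_load_moment(500.0, 6.0)
```

In practice the drone-derived angle feeds the classifier, and poles whose computed moment approaches the moment of rupture are flagged for maintenance.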
Forest Fire Detection and Localization using Lightweight Network
Smitha Haridasan and Saideshwar Kotha
As the global climate gets warmer, the probability and intensity of forest fires increase; it is therefore important to monitor forest fires intelligently. Forest fires not only cause serious economic losses and destroy the ecological environment, but also pose a great threat to the safety of human life. Forest fires spread quickly and are difficult to control in a short time. It is therefore imperative to detect a forest fire early, before it spreads, but traditional methods have obvious drawbacks in detecting fire in open forest areas: sensor-based systems perform well indoors but fail outdoors. The use of fire monitors based on computer vision technology mounted on Unmanned Aerial Vehicles (UAVs) has been expanding rapidly. In recent years, deep learning models have been used successfully in almost every field, in both industry and academia, especially for computer vision tasks. Powerful deep learners extract much deeper semantic information than traditional image processing. However, these models are huge, with millions (and billions) of parameters, and thus cannot be deployed on devices with limited resources (e.g., drones). Moreover, due to the shape, texture, and colors of fire, forest fire detection is a challenging task. To meet the needs of embedded forest fire monitoring systems on UAVs, lightweight deep learning algorithms were used in our experiments to classify images as fire or normal. We derived a heterogeneous dataset to evaluate classification performance. The fire images are then fed to an object detection algorithm (YOLOv3) to localize fire in the images. Much of the literature reports high accuracy and low false detection rates, and YOLOv3 helps improve the false detection rate. Experiments with lightweight models on our dataset show an average fire image classification accuracy of 88.71%, a fire localization mAP (at 50% IoU) of 42.10%, and an IoU of 37.39%.
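The mAP and IoU figures above both rest on the Intersection-over-Union overlap between a predicted fire box and the ground-truth box. The sketch below implements that standard measure for axis-aligned boxes; the example boxes are invented for illustration.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (x1, y1, x2, y2). A detection typically counts as correct for
    mAP@50 when IoU with the ground truth is at least 0.5."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping in a 1x1 square: IoU = 1 / (4 + 4 - 1).
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

Averaging such overlaps over all detected fire regions yields the reported IoU of 37.39%, while thresholding them at 0.5 underlies the mAP@50 of 42.10%.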