Our stellar programs attract outstanding students from around the world who work closely with our faculty to advance state-of-the-art research in computing technologies. We attribute our success to a strong tradition of collaborative research, close working relationships with local industries, modern facilities, and a dedicated commitment to student achievement.
We offer bachelor of science degrees in computer science (CS) and computer engineering (CPE), as well as a master's degree in software engineering (MSE). Our MSE program is unique within the UW System and offers graduates very desirable employment opportunities. We also offer a very popular dual-degree program that awards students a BS in Computer Science and a Master's in Software Engineering within a condensed time frame of only five years. If you have any questions, please contact us at email@example.com.
Two CS student organizations, Makeshift and Coders, were well represented at Eagle Fest this fall. Representing the Makeshift organization are, from left to right, Creed Zagrebski, Chris Richardson, and Bennett Wendorf. Representing Coders are Emma Iverson and William Schauberger.
Coders focuses on three areas: community outreach; campus in-reach, supporting students and promoting the professional skills that make for better CS students; and diversity, promoting minorities in CS and serving as a support group for minority students. The group meets weekly on Tuesdays at 3:30pm in Wing 102.
Makeshift gives students access to tools and training for electronics, 3D printing, coding, and other maker projects. The club offers a social atmosphere that embraces the spirit of creation and, more importantly, provides the training and equipment access to make members' creative ideas possible. The club also gives non-CS majors an opportunity to learn and tinker with subjects outside their major.
CS major Eric Jahns presented a poster at the 32nd Annual Wisconsin Space Grant Consortium summarizing his research with Dr. Dipankar Mitra this summer. The title is "Towards Machine Learning (ML)-based Modeling of Wireless Signal Degradation while Transmitting from Base-stations to LEO Satellites." An abstract is given below.
Communicating via electromagnetic signals from base-stations to satellites comes with a multitude of problems, such as transmission power limits, antenna gain, and atmospheric absorption loss, that can all degrade the signal by the time it reaches its target. In this research, we focus on modeling wireless signal degradation by using machine learning (ML) tools to develop predictable patterns of these signal losses. We will use ML tools such as Recurrent Neural Networks (RNNs) and Deep Recurrent Neural Networks (DRNNs) based on the long short-term memory (LSTM) architecture, chosen for their excellent performance in signal processing and their ability to learn the dynamics of sequential data efficiently. These models will be trained on both degraded and non-degraded signals generated using the finite element method (FEM)-based COMSOL Multiphysics tool.
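To give a flavor of the LSTM architecture mentioned in the abstract, here is a minimal sketch of a single LSTM cell processing a toy signal sequence, step by step. The weights, dimensions, and input "signal" below are arbitrary stand-ins for illustration only, not the trained models from the research.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b                          # stacked pre-activations for all four gates
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    g = np.tanh(g)                                 # candidate cell update
    c_new = f * c + i * g                          # cell state carries long-term memory
    h_new = o * np.tanh(c_new)                     # hidden state is the step's output
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

# Run a toy 10-step, 3-channel "signal" through the cell.
h, c = np.zeros(n_hid), np.zeros(n_hid)
signal = rng.normal(size=(10, n_in))
for x in signal:
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The forget gate is what lets the cell retain or discard information across time steps, which is why LSTM-based networks handle long sequential signals well.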
Two research articles by Dr. Dipankar Mitra have been accepted for conference publication and presentation in the 2022 IEEE RAPID Conference (Research and Applications of Photonics in Defense Conference). The work entitled "Neuronal Modeling Tool using DynaSim" was the result of ongoing collaboration with North Dakota State University and the Air Force Research Lab. See the abstract below.
Human cognitive processes remain an area of strong interest and ongoing research. One tool to gain greater insight into these processes is neuronal modeling. The following features are desirable in a neuronal modeling tool: a library of known parameters for different neurons and species, the ability to select the neuron and species of interest, the ability to quickly simulate neuronal behavior, and the ability to calculate metabolic requirements and efficiencies for various neuronal activity. Many of these features can be found in software today, but no single tool provides all of them. The goal of this work is to develop a neuronal modeling tool that incorporates all of these features.
Two research articles by Dr. Dipankar Mitra have been accepted for conference publication and presentation in the 2022 IEEE RAPID Conference (Research and Applications of Photonics in Defense Conference). The work entitled "Modeling the Self-Capacitance of Individual Plates in a Multi-Conductor System using PEEC" was the result of ongoing collaboration with North Dakota State University and the Air Force Research Lab. See the abstract below.
In many areas of electronics design, it is necessary to understand the different aspects of capacitance associated with various conducting surfaces in a particular layout. This is because as operating frequencies increase and dimensions decrease, capacitive coupling can become the dominant means by which noise is induced in a design. Therefore, the ability to extract an equivalent circuit for modeling and understanding capacitive coupling is both important and challenging. This paper presents insight on capacitive coupling by evaluating the capacitance between two parallel plates, with particular attention paid to the self-capacitance (with a reference at infinity) of each of the individual plates. More specifically, the Partial Element Equivalent Circuit (PEEC) method is used to compute the self-capacitance of each individual plate in the parallel plate capacitor problem, and results are verified by comparison to values from James Clerk Maxwell’s original works and the electrostatic solver ANSYS Maxwell 3D. Overall, it is shown how the self-capacitance of each individual plate changes as a function of distance between the parallel plates.
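For context, the textbook baseline for the two-plate problem is the ideal parallel-plate mutual capacitance, C = ε₀A/d, which ignores fringing fields. The short computation below evaluates that formula for illustrative plate dimensions; it is a reference point only, not the PEEC self-capacitance computation the paper describes.

```python
# Ideal parallel-plate mutual capacitance C = eps0 * A / d (fringing ignored).
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m):
    """Capacitance in farads for plate area (m^2) and gap (m)."""
    return EPS0 * area_m2 / gap_m

# Example: 1 cm x 1 cm plates separated by 1 mm.
c = parallel_plate_capacitance(1e-2 * 1e-2, 1e-3)
print(f"{c:.3e} F")  # ~8.854e-13 F, i.e. about 0.89 pF
```

The self-capacitances studied in the paper differ from this mutual value precisely because each plate is referenced to infinity rather than to the other plate, which is what the PEEC extraction captures.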
Adam Grunwald has received a Dean's Distinguished Fellowship to work with Dr. Elliott Forbes on heterogeneous multicore processor design over the summer months. His research topic is described in more detail below. Congratulations, Adam!
This project revolves around single-ISA heterogeneous multicore processor design. Multicore systems have multiple processor cores, each of which can execute a program independently of the other cores. Most multicore processors have cores of the same architecture -- that is, their cores are homogeneous. Generally, the homogeneous cores have an architecture that performs adequately across a wide variety of programs. However, some programs have computational needs that are outliers. Heterogeneous multicore processors have cores with different architectures. Some of the heterogeneous cores may have architectures that perform adequately for most programs, but others can have atypical architectures, ideally matching the computational needs of outlier program behavior.
One of the challenges of heterogeneous processors is determining on which core a given program should run. Ideally, this decision is made ahead of the program's execution, because a mismatch between a program's computational needs and the core's architecture can lead to a lost performance opportunity. The goal of this project is to analyze the runtime behavior of processor benchmark programs to find their computational bottlenecks. This can be done through simulation: a processor simulator is configured so that hardware resources are unrealistically large, and then each hardware resource type is reduced, one at a time. If runtime performance doesn't change when a particular hardware resource is reduced, then that resource is not a bottleneck for the benchmark program. If performance degrades as a result of reducing the hardware resource, then that resource is a bottleneck for that program. Understanding the computational bottlenecks of benchmark programs can then lead toward more accurate steering of programs to heterogeneous cores.
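The resource-reduction methodology can be sketched in a few lines. The "simulator" below is a hypothetical stand-in function with hand-picked saturation points, not a real processor simulator; it only illustrates the shrink-one-resource-at-a-time logic.

```python
def simulate_ipc(config):
    """Toy performance model: IPC saturates at 128 ROB entries and 32 KB of
    L1 cache, and extra ALUs beyond 2 do not help (arbitrary assumptions)."""
    ipc = 1.0
    ipc *= min(config["rob_entries"], 128) / 128
    ipc *= min(config["l1_kb"], 32) / 32
    ipc *= min(config["alus"], 2) / 2
    return ipc

# Start oversized, then shrink one resource type at a time.
oversized = {"rob_entries": 1024, "l1_kb": 256, "alus": 8}
reduced = {"rob_entries": 64, "l1_kb": 16, "alus": 4}

baseline = simulate_ipc(oversized)
bottlenecks = []
for resource in oversized:
    trial = dict(oversized)
    trial[resource] = reduced[resource]   # shrink just this one resource
    if simulate_ipc(trial) < baseline:    # performance dropped -> bottleneck
        bottlenecks.append(resource)

print(bottlenecks)  # ['rob_entries', 'l1_kb']
```

Here shrinking the reorder buffer or the L1 cache degrades the toy benchmark's performance while removing ALUs does not, so the first two are flagged as its bottlenecks, exactly the signal used to steer a program toward a matching heterogeneous core.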
Christian Strauss recently completed his MSE capstone project entitled Audio Canvas: An Audio Visualization Tool. The video below presents a brief summary of his work. The project produced a web application that generates visualizations of audio streams, with support for 3D objects, meshes, and 2D text and textures. Nice work, Christian!
Becky Yoshizumi, CS ADA, was recently awarded a University Staff Professional Development Grant from the University Staff Council. The grant is sponsoring a speaker on how generational differences affect student, staff, and faculty perspectives on aspects of work and life. The Employment Enrichment Day Committee helped to organize the event and provides the details given below.
Ever wonder why Millennials and Gen Z colleagues/students often have a different perspective on things? Steve Bench, founder of Generational Consulting in Madison, has answers. Bench will present “Attracting Tomorrow’s Talent with Today’s Leaders” at 10 a.m. Wednesday, May 25, in 1309 Centennial Hall. The talk will be preceded by a reception to reconnect and network from 9-10 a.m. in Hall of Nations, Centennial Hall. The events are organized by the Employee Enrichment Committee.
The keynote will focus on talent attraction and workforce retention by building understanding of who we are, how we were raised, and how each generation views “work” as a part of their identity. Examples can also apply to working with students from different generations. Bench will provide an overview of talent attraction and retention strategies to overcome generational differences and attract Millennial and Gen Z employees and keep them from leaving. Adulthood has changed, and depending on life stages, some may prioritize lifestyle over career. Bench will provide tips on how to manage and motivate someone who may not be as committed to their job as employees of previous generations were.
Dr. David Mathias has had an article accepted for publication in the highly regarded ACM Transactions on Evolutionary Learning and Optimization journal. The paper is co-authored by Dr. Annie Wu of the University of Central Florida and Daniel Dang, a student at Whitman College. The article's abstract is given below.
In this work, we investigate the application of a multi-objective genetic algorithm to the problem of task allocation in a self-organizing, decentralized, threshold-based swarm. We use a multi-objective genetic algorithm to evolve response thresholds for a simulated swarm engaged in dynamic task allocation problems: two-dimensional and three-dimensional collective tracking. We show that evolved thresholds not only outperform uniformly distributed thresholds and dynamic thresholds but also achieve nearly optimal performance on a variety of tracking problem instances (target paths). More importantly, we demonstrate that thresholds evolved for some problem instances generalize to all other problem instances, eliminating the need to evolve new thresholds for each problem instance to be solved. We analyze the properties that allow these paths to serve as universal training instances and show that they are quite natural.
After a priori evolution, the response thresholds in our system are static. The problem instances solved by the swarms are highly dynamic, with schedules of task demands that change over time with significant differences in rate and magnitude of change. That the swarm is able to achieve nearly optimal results refutes the common assumption that a swarm must be dynamic to perform well in a dynamic environment.
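The core threshold-based allocation mechanism the abstract refers to can be illustrated with a small sketch: each agent holds one response threshold per task and independently responds to the task whose stimulus most exceeds its threshold. The thresholds below are random stand-ins; in the paper they are evolved a priori by the multi-objective genetic algorithm.

```python
import random

random.seed(1)
NUM_AGENTS, TASKS = 10, ["track_x", "track_y"]

# Each agent gets one response threshold per task (here: random in [0, 1)).
agents = [{t: random.random() for t in TASKS} for _ in range(NUM_AGENTS)]

def allocate(stimulus):
    """Each agent independently picks the task whose stimulus most exceeds
    its threshold, or idles if no stimulus clears any of its thresholds."""
    assignment = {t: 0 for t in TASKS}
    for thresholds in agents:
        best, margin = None, 0.0
        for t in TASKS:
            m = stimulus[t] - thresholds[t]
            if m > margin:
                best, margin = t, m
        if best is not None:
            assignment[best] += 1
    return assignment

# As demand (stimulus) for track_x rises, more agents switch to it,
# even though every threshold is static.
low  = allocate({"track_x": 0.2, "track_y": 0.2})
high = allocate({"track_x": 0.9, "track_y": 0.2})
print(low, high)
```

This is also why static thresholds can cope with dynamic demand: the allocation shifts because the stimuli change, not because the thresholds do, with the spread of threshold values across the swarm determining how gracefully the response scales.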
Two MSE students working under the supervision of Dr. Mao Zheng (Computer Science) and Dr. Song Chen (Mathematics) received the MICS 2022 Best Student Paper award for their paper entitled "A Detection Tool for Traffic Objects". Congratulations on this outstanding work. An abstract of their paper is given below.
This manuscript describes the design and development of a software detection tool for traffic objects. It is a web-based system with a built-in machine learning model. The system allows users to upload images and videos and then detects traffic objects, such as cars, trucks, traffic lights, pedestrians, and bikers. Our machine learning model, using the YOLOv3 algorithm, will process images and videos and return results with the category and location of all detected objects. The results will be stored in the history, and users can then manage the information from there. Most of our data for training the YOLOv3 model came from the Udacity Self Driving Car Dataset. We tried the YOLOv3 model with different backbones, such as Darknet, MobileNet, and EfficientNet. The best combination of accuracy and speed was obtained using Darknet-53, so this network was chosen as our backbone.
To further improve our model’s mAP, we could train on a larger dataset; however, this would require a longer training process and more computing power. This manuscript also describes future work on incorporating our model as part of a road monitoring and/or self-driving system.
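One standard post-processing step in YOLO-style detectors like the one described above is non-maximum suppression (NMS), which collapses overlapping candidate boxes for the same object into a single detection. The sketch below shows generic NMS on hand-made boxes; the coordinates and scores are invented for illustration and are not from the students' system.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, score) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    while boxes:
        best = boxes.pop(0)
        kept.append(best)
        boxes = [b for b in boxes if iou(best, b) < iou_threshold]
    return kept

# Two overlapping candidate boxes for one car, plus one pedestrian box:
detections = [
    (10, 10, 60, 60, 0.90),    # car, high score -> kept
    (12, 12, 62, 62, 0.60),    # same car, lower score -> suppressed
    (200, 40, 230, 120, 0.80), # pedestrian, no overlap -> kept
]
print(nms(detections))  # two boxes survive
```

In a full detector this runs per class, so a car box never suppresses a pedestrian box even if the two happen to overlap.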
Walter Leifeld has received a Dean's Distinguished Fellowship to work with Dr. David Mathias on swarm intelligence research over the summer months. His research topic is described in more detail below. Congratulations, Walter!
An artificial swarm consists of a large number of simple agents that must solve a problem, typically through iterative performance of some number of tasks. Because assignment of tasks to agents by a central authority introduces points of failure, swarms are typically decentralized. This means that each agent must determine independently which tasks to perform and when. This problem, known as decentralized task allocation, is difficult, and becomes more so when the task requirements are dynamic, but it is critical to effective swarm performance. In the problem domains studied, swarm performance increases when the swarm is trained using a genetic algorithm, at the cost of high training time. Universal training instances have been found within these domains, allowing a swarm trained on one task set to perform well on most others and thereby avoiding costly retraining. This research will develop a generalized model that can represent the task allocation requirements of a wide range of applications and will explore universal training instances within this model. The work will deepen our understanding of how to solve complex problems that depend on large numbers of tasks, and of the general properties of universal training instances, independent of any one problem domain.