Illinois Reboot Spring 2021 Workshops

In February 2021, the University of Illinois Research Park, in partnership with Software Carpentry at the University of Illinois, launched a free technology training program to develop programming fundamentals and data analytics skills and strengthen the local tech workforce in Champaign-Urbana.

The program was designed for participants who lacked the programming skills and technical knowledge needed for technical roles. It also sought to reach underrepresented and non-traditional populations in the community and help them enter the Champaign-Urbana digital and tech workforce.

The program combined comprehensive instruction with real-world exposure to industry applications presented by local tech professionals. Participants also received individual career guidance to help them put their newly developed skill set to use.

Description of Workshops

AUTOMATING TASKS WITH THE UNIX SHELL SERIES

Use of the shell is fundamental to using a wide range of other powerful tools and computing resources. This lesson guided participants through the basics of file systems and the shell and started a path towards using these resources effectively.

  • Introducing the Shell/Navigating Files and Directories/Working with Files and Directories
  • Pipes and Filters/Loops
  • Shell Scripts/Finding Things
BUILDING PROGRAMS WITH PYTHON SERIES

Programming is fun, and Python makes it much easier. Among the many widely used Python libraries, this course focused on Pandas. This introduction to Python was built around a common scientific task: data analysis.

  • Setup/Working with Anaconda/Basic library functions
  • Storing Values in Lists/Analyzing Data from Multiple Files/Making Choices
  • Creating Functions/Errors and Exceptions/Defensive Programming/Repeating Actions with Loops
  • Leveraging the DataFrame object/Visualizing Tabular Data
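
For illustration, a minimal exercise in the spirit of this series might load a small CSV file with Pandas, inspect it, and plot a summary. The file name below is hypothetical, not taken from the actual workshop materials.

    # A minimal sketch of a typical exercise; "inflammation-01.csv" is a
    # hypothetical file name used only for illustration.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Load tabular data into a DataFrame and inspect it
    data = pd.read_csv("inflammation-01.csv", header=None)
    print(data.shape)
    print(data.describe())

    # Visualizing tabular data: plot the mean of each column
    data.mean(axis=0).plot()
    plt.xlabel("day")
    plt.ylabel("mean value")
    plt.show()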
MANAGING DATA WITH SQL SERIES

Three common options for storing data are text files, spreadsheets, and databases. Databases include powerful tools for search and analysis and can handle large, complex data sets. These lessons showed participants how to use a database to explore the expedition survey data used in the lesson.

  • Selecting Data/Sorting and Removing Duplicates/Filtering/Calculating New Values/Missing Data
  • Aggregation/Combining Data/Data Hygiene
  • Creating and Modifying Data/Programming with Databases (Python)
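
As a rough sketch of the "Programming with Databases (Python)" topic, Python's standard sqlite3 module can run SQL queries directly; the database file, table, and column names below are hypothetical.

    # Hypothetical example of querying an SQLite database from Python.
    import sqlite3

    connection = sqlite3.connect("survey.db")   # hypothetical database file
    cursor = connection.cursor()

    # Selecting, filtering, and aggregating in a single SQL statement
    cursor.execute(
        "SELECT person, COUNT(*) FROM readings "
        "WHERE quant = 'sal' GROUP BY person"
    )
    for row in cursor.fetchall():
        print(row)

    cursor.close()
    connection.close()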
VERSION CONTROL WITH GIT AND GITHUB

Version control is the lab notebook of the digital world: it’s what professionals use to keep track of what they’ve done and to collaborate with other people. Every large software development project relies on it, and most programmers use it for their small jobs as well. In this lesson, participants learned to use Git from the Unix Shell as well as how to navigate GitHub.

  • Command-line Git
  • GitHub – browser interface
IMAGE PROCESSING

The digital world would be far less vivid without images, and images make up a large share of the data generated today, so studying them is crucial. This course introduced the fundamentals of working with images and processing them.

  • What are images and how are they structured?
  • Python libraries to process image data/Basic operations on images/Plotting images
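
For illustration, here is a short sketch of these ideas using scikit-image and Matplotlib, one common choice of libraries and not necessarily the ones used in the workshop.

    # Images are arrays; a few basic operations and plotting, for illustration.
    import matplotlib.pyplot as plt
    from skimage import data, color

    image = data.astronaut()        # sample RGB image bundled with scikit-image
    print(image.shape)              # (rows, columns, color channels)

    gray = color.rgb2gray(image)    # basic operation: convert to grayscale

    # Plotting the original and processed images side by side
    fig, axes = plt.subplots(1, 2)
    axes[0].imshow(image)
    axes[0].set_title("original")
    axes[1].imshow(gray, cmap="gray")
    axes[1].set_title("grayscale")
    plt.show()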
DATA ANALYSIS

"Data" is the buzzword of the 21st century. Given the enormous volume of data being produced, it is essential to know how to clean, maintain, and work with it. Machine learning algorithms are well suited to this purpose and are straightforward to implement using Python libraries. This course covered the basics of those algorithms and their implementation.

  • Supervised/Unsupervised machine learning
  • Classifying a given problem/Choosing an appropriate algorithm to solve it
  • Clustering/Classification
  • Visualizing the data
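
As an illustrative sketch only, the example below uses scikit-learn (an assumed library choice) to show how supervised classification and unsupervised clustering look in practice; the dataset and models are placeholders.

    # Minimal sketch: supervised classification and unsupervised clustering.
    # Dataset and model choices are illustrative, not the workshop's own.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)

    # Supervised learning: train on labeled data, evaluate on held-out data
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    classifier = KNeighborsClassifier().fit(X_train, y_train)
    print("classification accuracy:", classifier.score(X_test, y_test))

    # Unsupervised learning: group the same samples without using labels
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("first five cluster labels:", clusters[:5])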

Description of Capstone Project

PYTHON CAPSTONE

This capstone project, a recommendation system, was built in Python. Data came from several sources and had to be combined and merged into a single dataset. Exploratory data analysis was performed to summarize the data, gauge key facts and figures, and detect outliers and missing values. This was followed by data cleaning and data visualization. Finally, the dataset was curated so that it could be fed to the recommendation engine.
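
A condensed sketch of this workflow in Pandas is shown below; the file and column names are hypothetical and stand in for the project's actual data sources.

    # Hypothetical sketch of the capstone's data-preparation workflow.
    import pandas as pd

    # Combine and merge data from several sources into a single dataset
    ratings = pd.read_csv("ratings.csv")            # hypothetical source 1
    items = pd.read_csv("items.csv")                # hypothetical source 2
    merged = ratings.merge(items, on="item_id", how="left")

    # Exploratory data analysis: summary statistics, missing values, outliers
    print(merged.describe())
    print(merged.isna().sum())

    # Data cleaning before curating the dataset for the recommendation engine
    cleaned = merged.dropna(subset=["rating"]).drop_duplicates()
    cleaned.to_csv("curated_dataset.csv", index=False)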