# Alexei Naumann's Website

This is my robot Freddie Mercury. My name is Alexei Naumann and I'm a freshman computer science student at the University of Southern California.

## Lab 1

This lab consisted of several diagnostics and demonstrations of the Scribbler robot's basic on-board sensors. The results are shown below.

| Name | IR Sensor | Line Sensor | Light Sensor | Obstacle Sensor |
| --- | --- | --- | --- | --- |
| Alexei | 8 inches | (1, 1) | (65408, 65399, 65158) | (6400, 6400, 6400) |
| Sam | 7 inches | (0, 0) | (65408, 65285, 65408) | (1200, 1930, 1920) |
| Nick | 9 inches | (0, 0) | (47362, 64809, 64622) | (5000, 5000, 5000) |
| Aditya | 9 inches | (0, 0) | (65152, 65391, 65152) | (6400, 6400, 6400) |

The source code used for the Myro Diagnostic can be found here: Source Code

The line sensor test checked the sensor's reliability: placed on an off-white floor, the robot would stop upon encountering the border of a mat (~1 cm thick).
The IR sensor test found the distance at which both sensors would be tripped; the distance to the obstacle was then measured.
The light test compared the sensors in different environments: the left sensor was illuminated
with an iPhone camera light while the others used the ambient light of SGM 101.
The obstacle test measured the maximum value the sensor would return when placed as close as
possible to an obstacle.
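The stop condition and saturation check from these tests can be reduced to small decision functions. This is an illustrative sketch, not the diagnostic's actual code; the sensor encodings (line sensors reading 1 on the dark mat, obstacle sensors saturating near 6400) are assumptions drawn from the table above.

```cpp
#include <cassert>

// Assumed encoding: line sensors return 1 on a dark surface (the mat)
// and 0 on the off-white floor. Stop when both report the mat's border.
bool shouldStop(int leftLine, int rightLine) {
    return leftLine == 1 && rightLine == 1;
}

// Assumed saturation value from the table: obstacle sensors max out at
// 6400 when an object is pressed as close as possible.
bool obstacleVeryClose(int reading, int saturation = 6400) {
    return reading >= saturation;
}
```

With the readings from the table, Alexei's (1, 1) line reading would trigger a stop while the others' (0, 0) would not.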

## Lab 2

The source code used for this Lab can be found here: Source Code

## Lab 3: Fibonacci Sequence

In this lab we applied algorithm design and mathematics to draw a Fibonacci spiral. We drew quarter circles of successively larger radii to form a connected spiral.

The source code used for this Lab can be found here:
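The radii of the quarter circles follow the Fibonacci numbers, which is what makes consecutive arcs connect into a spiral. A minimal sketch of generating those radii (the `scale` factor, converting Fibonacci units to drawing units, is an assumption):

```cpp
#include <vector>

// Produce the first `count` Fibonacci numbers, each scaled by `scale`,
// to use as the radii of the spiral's successive quarter circles.
std::vector<int> fibonacciRadii(int count, int scale = 1) {
    std::vector<int> radii;
    int a = 1, b = 1;  // the sequence starts 1, 1, 2, 3, 5, ...
    for (int i = 0; i < count; ++i) {
        radii.push_back(a * scale);
        int next = a + b;
        a = b;
        b = next;
    }
    return radii;
}
```

Each quarter turn then uses the next radius in the list, so every arc ends tangent to where the following, larger arc begins.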

## HW 1: Robots Got Talent

Prelab:

1. The Robot will sing the song "You Spin Me Round" (popularized by the group Dead or Alive). This action will be triggered when a bright light is shined on one of the three light sensors.

2. The Robot will "attempt" to draw a cube. This action will be triggered when the line sensors on the bottom of the robot detect a white surface (the dry-erase poster).

3. The random activity implemented in this design takes digits from 1 to 4 and assigns each a function. The 1st function, if chosen, makes the robot spin in a circle, the 2nd makes it move in a square path, the 3rd prints out all the data from the robot's main sensors, and the 4th makes the robot move backwards. This portion of the program runs 4 times and picks a random function each time.

4. The program starts with an if statement that checks whether the battery level is above 6.5; if it is not, the terminal reads "Battery Level is too Low." The program runs through the song first, then the drawing function, and then the random function, giving time and prompts between each to allow for user adjustment.

The source code used for this Lab can be found here.
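The battery gate and the random dispatcher described in the prelab can be sketched as below. This is a hedged outline, not the submitted code: the action labels stand in for the real movement and sensor-printing functions, and the voltage threshold is taken from the prelab text.

```cpp
#include <cstdlib>
#include <string>

// Gate from prelab item 4: run only if the battery level is above 6.5.
bool batteryOk(double voltage) {
    return voltage > 6.5;  // otherwise: "Battery Level is too Low."
}

// Dispatcher from prelab item 3: digits 1-4 each map to one action.
// The returned strings are placeholders for the actual robot functions.
std::string pickAction(int digit) {
    switch (digit) {
        case 1: return "spin in a circle";
        case 2: return "move in a square path";
        case 3: return "print sensor data";
        case 4: return "move backwards";
        default: return "invalid";
    }
}

// Draw a random digit from 1 to 4; the program calls this four times.
int randomDigit() {
    return std::rand() % 4 + 1;
}
```

Running `pickAction(randomDigit())` four times reproduces the "pick a random function every time" loop.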

## Lab 4: Insect-Like Behavior

In this lab we implemented algorithms to simulate insect-like behaviors in robots.
The source code used for this Lab can be found here:

## HW 2: Robot Olympics

1. Write a description of your robot’s performance for the Opening Ceremony. What is your algorithm and sensors used for this event?
My robot will be part of the "C." It takes the 9th position in the C. It will take an input, "W," "S," "A," or "D," from the terminal to steer the robot to the correct position.
2. Write a description of your robot’s Line Following behavior. What is your algorithm and sensors used for this event?
When the robot's [1] sensor is triggered (on a black surface/the line), the robot will turn left. When it is not triggered (on a white surface), the robot will turn right.
3. Write a description of your robot’s Maze Solving ability. What is your algorithm and sensors used for this event?
The IR sensors are used to trigger the robot's maze solving algorithm. The robot continues to move forward until the obstacle sensor is triggered. The robot then turns left and moves forward if unblocked, or turns left a second time if it is blocked again.
4. Write a description of your robot’s Fastest Drawer behavior. What is your algorithm and/or sensors used for this event?
The light sensor is used to trigger the robot's fastest drawer algorithm. Then the robot draws the design that is programmed into it.
5. Write a description of what is your algorithm for structuring your robot team’s behaviors.
There is a menu that takes in a user input to trigger the different functions and algorithms of the performance.
The source code used for this Lab can be found here.
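The line-following and maze rules above reduce to simple decision functions. The sketch below is illustrative, with assumed sensor encodings: a line sensor reading 1 on the black line and 0 on white, and boolean "blocked" flags derived from the obstacle sensor.

```cpp
#include <string>

// Line following: on the line (reading 1) turn left; off it, turn right.
// Alternating the two turns keeps the robot weaving along the line's edge.
std::string lineFollowStep(int lineSensor) {
    return lineSensor == 1 ? "turn left" : "turn right";
}

// Maze solving: drive forward until blocked, then turn left; if the new
// heading is also blocked, turn left again.
std::string mazeStep(bool blockedAhead, bool blockedAfterTurn) {
    if (!blockedAhead) return "forward";
    return blockedAfterTurn ? "turn left twice" : "turn left, then forward";
}
```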

## Lab 5: Arrays

The purpose of this lab was to create an array of songs and functions that could be assembled into a playlist of actions for the robot to execute.
The source code for this lab can be found here.
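One way to realize such a playlist is an array of function pointers walked in order. This is a minimal sketch under that assumption; the three actions return labels here in place of actual robot commands.

```cpp
#include <string>
#include <vector>

// Placeholder actions standing in for the robot's real song/movement functions.
std::string sing()  { return "sing"; }
std::string spin()  { return "spin"; }
std::string drive() { return "drive"; }

// Execute every function in the playlist in order, logging what ran.
std::vector<std::string> runPlaylist(const std::vector<std::string (*)()>& playlist) {
    std::vector<std::string> log;
    for (auto action : playlist)
        log.push_back(action());  // call each queued action
    return log;
}
```

A playlist is then just `runPlaylist({&sing, &spin, &drive})`, and reordering the array reorders the performance.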

## Lab 6: Pointers

In this lab we utilized pointers and swap functions to assign actions to the robot and then switch those actions.
The source code for this lab can be found here.
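The swap described above is the classic pointer exercise: exchange two values in place through their addresses. A minimal sketch, with the robot's actions represented as strings:

```cpp
#include <string>

// Swap two assigned actions in place via pointers.
void swapActions(std::string* a, std::string* b) {
    std::string tmp = *a;  // hold the first action
    *a = *b;               // overwrite it with the second
    *b = tmp;              // store the held action in the second slot
}
```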

## HW 3: Urban Search and Rescue (Training)

1.) Our team’s strategy is to each tackle a different quadrant of the room with our respective robots. For the allotted time we plan to take as many pictures as we can of our respective quadrants.
2.) The robot will run using user-inputted commands. It will utilize sensor data prior to executing the commands to make sure there are no obstacles in its projected path.
3.) Our robots will take pictures of their individual quadrants and we will analyze the images manually (for now) to determine the exact location of the lost scribblers in the mock disaster area.
4.) Each team member will have a unique method for analyzing their specific quadrants for the presence of lost scribblers. Hypothetically since the team member’s robots are in their own quadrant in the area, the pictures each takes should be unique and indicative of their quadrant.

## HW 4: Mars Rover

1.) To find an alien, my program checks each column of pixels for the presence of a pixel with the green alien color. Once one is found, it starts another search from the column where it left off, looking for the next column that has no green pixel. At that point it takes the locations of the start and end of the alien and uses them to determine that an alien has in fact been found.
2.) The Alien class I made has data members for the start and end values of the alien's x and y coordinates, as well as for its height, width, and area. The class has member functions that find the x and y start and end coordinates, functions that calculate the alien's height, width, and area, and a function that changes all of the found alien's pixels to black so that the next time the search algorithm runs, it does not pick up the same alien again.
3.) The object detection algorithm first finds where the alien starts (by testing for the presence of a green pixel in a column) and determines where it ends by finding the next column that has no green pixels at all. Once this column devoid of green pixels is found, an alien has been located, and a function runs that "disguises" all of the found alien's green pixels as black. That way, when the search algorithm runs again, it picks up a different alien rather than the one it just found.
4.) I started by testing my program on a picture of only one alien to see if it could successfully locate that alien's start and end. This image was the simplest to start with because the single alien sat in the middle of the image and all of the picture's green pixels were clustered in one place.
5.) To sort the aliens by size I used the calculated pixel area of each alien. This area was determined by subtracting each alien's starting y value from its ending y value and its starting x value from its ending x value. The resulting height and width were then multiplied together to get an area, which was stored as a data member of the alien object.
6.) The proximity of the aliens to the rover was determined by their largest y values. Those with the highest ending y value were closest to the bottom of the picture and thus closest to the rover taking the picture.
7.) The Big O of my sorting algorithm was O(n²).
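The column scan, area computation, and O(n²) sort described above can be sketched on a toy image. This is an illustrative reconstruction, not the submitted code: the image is a vector of strings with `'G'` marking alien-green pixels, and the detector simply advances past each green run instead of blacking it out (a simplification of the "disguise" step).

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Illustrative Alien record: start/end columns and rows of the bounding box.
struct Alien {
    int xStart, xEnd, yStart, yEnd;
    int area() const { return (xEnd - xStart) * (yEnd - yStart); }
};

// Does this column contain any green ('G') pixel?
bool columnHasGreen(const std::vector<std::string>& img, int col) {
    for (const auto& row : img)
        if (row[col] == 'G') return true;
    return false;
}

// Scan left to right: an alien starts at the first green column and ends
// at the next column with no green pixels at all.
std::vector<Alien> findAliens(const std::vector<std::string>& img) {
    std::vector<Alien> found;
    int width = img[0].size(), height = img.size();
    int col = 0;
    while (col < width) {
        if (!columnHasGreen(img, col)) { ++col; continue; }
        Alien a{col, col, height, 0};
        while (col < width && columnHasGreen(img, col)) ++col;
        a.xEnd = col;
        // Tighten the row bounds within the alien's columns.
        for (int y = 0; y < height; ++y)
            for (int x = a.xStart; x < a.xEnd; ++x)
                if (img[y][x] == 'G') {
                    a.yStart = std::min(a.yStart, y);
                    a.yEnd = std::max(a.yEnd, y + 1);
                }
        found.push_back(a);
    }
    return found;
}

// Selection sort by area, largest first: two nested passes, hence O(n^2).
void sortByArea(std::vector<Alien>& aliens) {
    for (std::size_t i = 0; i < aliens.size(); ++i)
        for (std::size_t j = i + 1; j < aliens.size(); ++j)
            if (aliens[j].area() > aliens[i].area())
                std::swap(aliens[i], aliens[j]);
}
```

On an image with a 1×2 alien and a 2×2 alien, the detector returns both and the sort places the 2×2 alien first.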

## Final Project: Human-Computer Interaction

1.) The CS topic my team is demonstrating is human interaction with computers.
2.) My program engages users by giving them options of things to watch the robot draw. It then takes this interaction further and prompts them to try drawing with the robot themselves.
3.) I was originally going to use a webpage as my user interface, but time wouldn't allow for it. Instead, I made my interface the Ubuntu terminal; it shows a menu of options for the user to choose from and uses keyboard input to call different functions within my program.
4.) I evaluate my human-computer interface by tracking the number of times each function is called (either watching the robot draw or drawing with the robot manually).
5.) I evaluate the user's interaction by tracking the number of times the user calls a function and by tracking the average amount of time the user spends with each function.
6.) My program collects user information by counting the options they choose within each menu; it also uses the computer's clock to track how long each function is being used.
7.) My evaluation report is generated by reading the number of times each function is called (from a text file that is updated after each user) and the average time each function is interacted with (also acquired from a text file).
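The call-counting and time-averaging described in items 4–7 can be sketched as a small tracker. This is a minimal outline under stated assumptions: the `UsageTracker` name is hypothetical, and persisting the counts and times to the text file is omitted.

```cpp
#include <map>
#include <string>

// Track how often each menu function is called and how long each session
// with it lasts, so an average interaction time can be reported.
struct UsageTracker {
    std::map<std::string, int> calls;
    std::map<std::string, double> totalSeconds;

    void record(const std::string& fn, double seconds) {
        calls[fn] += 1;            // one more invocation of this function
        totalSeconds[fn] += seconds;  // accumulate time spent in it
    }

    double averageSeconds(const std::string& fn) const {
        auto it = calls.find(fn);
        if (it == calls.end() || it->second == 0) return 0.0;
        return totalSeconds.at(fn) / it->second;
    }
};
```

The evaluation report is then just the per-function call count and `averageSeconds` value, written out after each user.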
