# Lab 1: Drawing Stuff

In this lab the group drew what we are calling a figure eight. Though it isn't exactly a figure eight, we found through testing that we preferred this shape. The Scribbler robot drew the shape with the following code:

```cpp
#include <Myro.h>
#include <math.h>
#include <stdio.h>
#include <iostream>

using namespace std;

int main()
{
    // The drawing commands were omitted here. Roughly, the robot traced
    // one loop with the left wheel faster, then the mirror-image loop
    // with the right wheel faster, via robot.motors(left, right) calls.
    return 0;
}
```

# Lab 1: part deux

• My robot, Tommy tRobot, returned values of 1 on the body IR sensors, except when an object was about an inch away, at which point they returned 0.
• My Fluke IR sensors returned 0 when no objects were in front of them, and the readings increased as objects got closer, up to a maximum of 6400 with an object directly in front.
• My light sensors returned values between 65000 and 65200 when uncovered, and 65200 to 65400 when covered.
• My battery sensor returned a value of 7.57818.

Joseph Boman

• Joe's robot, TrashBot, has IR sensors that returned 0 when no objects were near and 1 when objects were between 2 and 3 inches away.
• His robot's Fluke IR sensors returned values of 0 when no objects were near, increasing to 6400 when objects got within 3 inches.
• His robot's light sensors returned values between 64750 and 64900 when uncovered, and around 65400 when covered.
• His robot's battery sensor returned a value of 7.29221.

• Her robot, Puca, returned values of 1 on the body IR sensors when no object was in front, and 0 when an object was 2 inches away.
• Her Fluke's IR sensors returned values of 0 when no objects were near, increasing toward 6400 as objects got closer.
• Her light sensors returned values between 64700 and 65000 when uncovered, and 65000 to 65200 when covered.
• Her battery sensor returned a value of 7.29221.

• His robot, Camo, returned values of 1 on the body IR sensors when no objects were nearby, and 0 when objects were 2 inches or closer.
• His Fluke's IR sensors returned values of 0 when no objects were near, increasing to 6400 when objects got within 3-4 inches.
• His light sensors returned values between 64500 and 65000 when uncovered, and upwards of 65000 when covered.
• His battery sensor returned a value of 7.38753.

# Lab 2

In this program, Tommy tRobot can take two courses of action, depending on his current environment.

If he starts out on a dark surface, the robot will, in a sense, "roomba" around and avoid obstacles until the center light sensor reads a value outside a user-input sensitivity band around a calibrated starting light-sensor average.

If the robot starts out on a light surface, the robot will spin around while playing the song Happy Birthday.

After the above actions take place, the robot will spin around until both IR sensors sense something. Tommy tRobot then disconnects.

My code is located here

# Lab 3: Fibonacci Lab

Done with Andrew Yocca, Jon Goodheart, Dallas Heyden

We created the above Fibonacci spiral by breaking the problem into two main components: (1) deriving the nth Fibonacci number and (2) drawing a quarter circle of any radius. The first was achieved recursively, because each number in the sequence naturally depends on the previous two. The second was achieved through a game of guess and check, and then by deriving a formula for the wheel speeds (used in the robot.motors(left, right) command). The code is linked here.

# Homework 1: Robots Got Talent

This project showcases the talent of Tommy tRobot: he performs a song, draws a picture, and has a little surprise in store, if the battery is charged enough, of course.

## Pre-Lab

1. Tommy tRobot's talent is going to be Olympic themed, so he will be performing the Olympic theme song. Since some pieces of the short selection he will be performing are repetitive, for loops will be used to save time and energy. All of the notes will be hard-coded, however. His singing will be triggered by sensing an obstacle in front of the center Fluke IR sensor.
2. Since the theme is the Olympics, Tommy tRobot will be drawing the Olympic rings. Since no part of this is necessarily repetitive, looping isn't really helpful; the only way to really do this is through hard coding. To save some time, most of the wait times for the motor commands are implemented with variable names rather than numbers, so that it is easier to tune, for example, how long it takes to draw a circle. The drawing ability is triggered by both body IR sensors (not Fluke sensors) simultaneously seeing something.
3. Tommy tRobot has the surprise ability of following a branching line in a random manner. It is random because he picks left/right through a random-digit algorithm: if the random number is 1, he turns one way; if it is -1, he turns the other way. (These are the only two values the algorithm can generate.) Essentially, the robot moves at a slow pace while both light sensors read a dark surface. When the robot encounters a light surface, it moves left or right at random until it reaches a dark surface; if it doesn't do so within 1 second, it turns back to the other side. The behavior is started and ended by light on the center light sensor.
4. The algorithm for the overall layout of the program is to create separate functions for the drawing, the singing, and the surprise, and then implement helper methods where necessary or convenient. As far as transitions are concerned, the robot moves from the drawing to the surprise to the singing seamlessly; all it takes to end the former activity is the trigger for the next one. Manual intervention is required to remove the pen from the robot at the end of the drawing (or else it will draw when we don't want it to).

Here's the code

# Lab 4: Braitenberg Vehicles

This lab was performed with Zach Zeff, Jerry Webb, and Kristen McNeal

It is of note that the book this class uses utilizes the old Scribbler, which presumably used the light-sensor end as the front. Our Scribblers use the Fluke end as the front, so all motor commands must be negated and swapped (in terms of left and right). Here is my code.

# Homework 2: The Robot Games

This lab was performed with Joseph Boman, Cherys Fair, and Mathew Schacher

## Pre-Lab

1. The opening ceremony requires only that the robot be controllable by the user and able to play the fight song. The latter half is as simple as hard-coding the USC Fight Song and encapsulating it in a method for ease of use. The main method will take a character of user input, which is fed into a switch: 'a' can mean turn left (some constant amount of time), 'd' can mean turn right (the same amount of time), 's' can mean go backward, and 'w' can mean go forward (each for the same amount of time). All of these specific values are arbitrary, but a last value, 'k' for instance, should be used to play the fight song and exit the switch once the robot is in position on its respective letter.
2. The line-following behavior will be encased in a while loop, and will treat the back of the robot as the front. While both line sensors read darkness, go forward (which is robot.motors(-1, -1)). If the left sensor (treating the back as the front) reads white, rotate right until both line sensors read dark again. If the right sensor reads white, rotate left until both read dark again. If both sensors read white, go backward for a short period and rotate a bit so the procedure can be attempted again.
3. The maze-solving ability should be pretty straightforward and will involve the Fluke IR sensor. Encased in a while loop, the robot will move forward until it detects something in front of it (some testing should be done to find the threshold value so that the robot stops in the middle of the intersection). It should then look left and look right, record which way is open (i.e., unobstructed), turn in that direction, and return to the main loop. It exits the loop when the middle light sensor sees light.
4. The fastest drawing technique will be a hard-coded attempt at drawing the required figure in the easiest way, meaning the smallest turns and the fewest times the robot has to retrace lines it has already drawn. It will not require any sensors. It is important to know how many degrees the robot can rotate per second and how far it can travel in inches per second, to be sure the measurements of the drawing are correct. It will be triggered by one of the IR sensors.
5. The control structure of the program will not be unlike the menu program used to control the robot by command in item 1. It will involve user input funneled into a switch to branch the code. The drawing code will only activate after it has been selected and then one of the IR sensors senses an obstacle. Naturally, the code only works if the battery is sufficiently charged.

Here is our code, and here is the zip file.

# Homework 3: Urban Search and Rescue Robot Training Exercise

This homework was performed with Andrew Yocca, Dallas Heyden, and Kushaan Kumar

## Pre-Lab

1. In order to effectively locate the robots, we will split up the task of surveying the area. Each member of our team will tackle a different quarter of the larger square region, divided into four smaller squares. Though ideally we would like to start in those corners, if we all have to enter the rescue area at one location, the person who has to travel furthest will enter first, followed by those who have to travel less far. Each robot will be responsible for using its sensors to categorize and map its quarter, so that together we can map the whole area. The most useful sensor will be the camera, which will allow the robots to visualize the area, identify lost robots, and move toward them if necessary. The obstacle sensors may also be useful, because they make it easy to tell whether something is in front of us without taking a picture, which can waste time.
2. The robot will navigate through the disaster area with user input. Using the pictures gathered and shown to the user during the simulation, along with the aforementioned obstacle sensor data, the user should be able to direct the robot around to photograph the entire area, thereby locating all lost Scribblers.
3. Every picture the robot takes will be stored in two arrays: one of raw pictures, the other of edited pictures. Once the 10-minute simulation is complete, the user will view each picture and edit those that contain Scribblers, drawing a green box around each lost Scribbler in the picture itself. By also keeping track of which indices contain edited pictures, a slideshow of found Scribblers and overall pictures can be shown at the conclusion of the second 10-minute period.
4. The Team Mapping portion of this challenge will hardly be robot-based at all, outside of the fact that the robots provide the pictures. By comparing pictures from adjacent quadrants and knowing where we are in relation to the boundaries of the region, the team hopes to plot on paper the locations of the lost Scribblers identified in sub-problem 3.

Here is my code, and here is the zip file.

# Homework 4: Mars Rover

## Pre-Lab

1. In order to find the aliens in the Martian pictures, I will search the image recursively for each individual alien. To do this, I will find the first alien, recolor it, and then search the same image again. Since the processing is based on the color range that defines the green of an alien, it will skip over the already-processed alien, because the new color no longer falls in that range.
2. The class Alien needs to hold information about the location of each alien and its size. So data members include an integer area because it is bounded by an integer pixel height and width. The bounds will be encapsulated by a struct that holds the coordinates for the top-left and bottom-right points. Member functions will at the very least include a getArea() function that allows me to sort the Aliens by area and also a getBounds() function that allows me to sort by location. At this time I consider it fairly unnecessary to allow mutator methods because the data within an Alien shouldn't need to be changed once initialized, but I might have to make changes depending on how the programming goes.
3. The process for developing the object recognition algorithm began with recognizing that finding multiple aliens reduces recursively to finding a single alien: if I can repeatedly find aliens until there are none left, then I will have recognized all of the aliens in the picture. This recursive formulation laid the foundation for the algorithm that processes the Martian images.
4. The sorting algorithm used to sort the Aliens by size is a type of selection sort that swaps each element in the array with the maximum of the elements that follow. Comparisons are made using the getArea() member function of the Alien class.
4. The sorting algorithm used to sort the Aliens by size is a type of selection sort that swaps each element in the array with the maximum of the elements that follow it. Comparisons are made using the getArea() member function of the Alien class.
5. The sorting algorithm used to sort the Aliens by location is the same type of selection sort. Comparisons are made using the getBounds().botY value, which is the y-value of the bottom point of the alien; Aliens with larger y-values are "closer" to the bottom of the screen.

Here is my code.

# Final Project: Control Structures -- For Loops

## Pre-Lab

1. The overarching topic for the group is control structures, which are critical to the foundations of computer science, and thus to the development of data structures and algorithms, which ultimately help define the field. The topic for my specific demo is for loops, which are essential for parsing through lists of data. Since lists of data are so important to computer science overall, for loops are equally important.
2. The guidelines for this project denote that we craft a C++ program with some useful computer interface for users to interact with. Unfortunately, we don't have a solid enough foundation to make anything particularly interesting. So, I've divided this project into two parts: (1) the development of an Android application with a user-friendly experience that would be enjoyable for middle school kids, which uses all elements required of the C++ application (with the exception of pointers) and (2) the development of a C++ application to analyze the output data from the Android Application, which in itself will use a command-line user interface and fulfill all the requirements of the project. In a sense, this project will include two sub-projects. The Android Application will engage the user with touch controls that allow the user to select a picture, process it by picking colors from the picture, and answer a short survey that evaluates the app as a whole. The C++ program will engage the user with command-line prompts that ask the user to input data from the survey information, and it will print out useful statistical information to the user for evaluation. In order for it to be even more user-friendly, I'm using a third-party Mac OS Application called FunBooth to take pictures of the kids so they can edit those pictures on my android device. The transport will require a Wi-Fi connection, because I'll be using my device as an FTP server.
3. The Android interface is developed in XML, both in code and graphically; this process is greatly supplemented by the Eclipse IDE. The command-line prompts for the C++ program are very straightforward to craft, as they come from the std namespace.
4. Evaluation of interface is going to come largely from the survey included in the Android Application - which includes fields which rate the quality of the app and additional comments. The C++ application can provide important statistical information to make conclusions about those ratings.
5. Evaluation of the user-experience is going to come largely from the survey included in the Android Application - which includes ratings about the overall understanding of for loops as well as additional comments. The C++ application can provide important statistical information to make conclusions about those ratings.
6. One activity included within the Android Application includes a survey section, which takes almost no time to complete because of the simplicity of the interface. Responses to this aspect of the program will be instrumental in collecting user interaction information.
7. Based on the data output of the C++ application, I will generate a statistical analysis of how users viewed my application and how well they understood for loops in the context of control structures. This evaluation will be formulaic in nature, but ultimately it will let me gauge how effective the demo was.

## Application Screenshots

- Image Selection Screen
- Unedited Picture
- Edited Picture
- Survey Example
- Text File of example responses and analysis

Here is my code.