Robotics Portfolio


Page 1: Robotics Portfolio

Robotics Portfolio
Raphael Chang

2014 FRC Robot CAD Model Render

Page 2: Robotics Portfolio

2014 Hexagonal Drive Base and Bumpers

During the 2014 FRC season, our robot's biggest weakness was its susceptibility to being “T-boned”: pinned from the side by another robot and held in place by the friction between the bumpers. This effect can be seen here.

I decided to pursue a solution to this problem. Two factors could reduce the friction preventing the robot's movement: the angle at which the frictional force acts, and the coefficient of friction between the bumpers.

Frame Geometry

Angling the sides of the frame improves the situation in three ways. The friction force parallel to and opposing the traction force is reduced, because the friction is no longer parallel to the traction. The normal force between the bumpers is also reduced, because a component of our traction pulls the frame away from the contact with the other robot. Finally, some of the pushing force from the pinning robot adds to our robot's forward force, because the push is no longer perpendicular to our direction of travel.

Because this project was done in the offseason, I chose to modify our current drive base rather than design a new one. Using Autodesk Inventor CAD, I found how much angle I could add to our existing frame geometry without exceeding the frame perimeter limit of 112 inches.

[Force diagrams comparing the regular drive base and the hexagonal drive base, showing the traction forces (Ftraction1, Ftraction2) and the friction force (Ffriction)]

Page 3: Robotics Portfolio

The maximum angle turned out to be around 11.7 degrees, which gave our robot 104% of its original traction force. This meant that a robot T-boning us would not only have no effect, it would actually help us drive away!
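A minimal sketch of the statics behind that figure, assuming the pinning robot pushes with force $F$ perpendicular to our direction of travel against a bumper face angled at $\theta$ (this is my reconstruction, not the original analysis): the push splits into a normal component $N = F\cos\theta$ and a tangential component $F\sin\theta$ along the face, so the drive-direction force contributed by the pin is

\[
F_{\text{forward}} = F\sin\theta\cos\theta - \mu F\cos^2\theta = F\cos^2\theta\,(\tan\theta - \mu).
\]

The pin helps rather than hinders whenever $\tan\theta > \mu$; at $\theta \approx 11.7^\circ$, $\tan\theta \approx 0.21$, so a bumper fabric with a friction coefficient below roughly 0.2 turns a T-bone into a net push forward.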

To manufacture the bumpers, which were made of wood, I used a box joint to join two pieces of wood together at an angle. I modeled the bumper wood in Autodesk Inventor before manufacturing to get the correct angles for the cuts.

Low-friction Fabric

Another way to reduce friction was to change the material of the fabric covering our bumpers. Doing some research, I found that sail cloth had a very low coefficient of friction. Performing a static friction test comparing the original material with the new sail cloth, I found that the sail cloth's coefficient of friction was 3 times lower than the original's. We covered our new bumpers in this sail cloth, and the results were impressive. Here is a video showing the final result.

[Photo: the completed angled bumpers]
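For reference, one common way to measure a static coefficient of friction is the inclined-plane method (I am not certain this is the test we used): place one material on a ramp covered in the other and raise the ramp until slipping begins at angle $\theta_{\text{slip}}$, giving

\[
\mu_s = \tan\theta_{\text{slip}}.
\]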

Page 4: Robotics Portfolio

Ball Manipulation Software

The 2014 FRC game involved passing a game ball to other robots on our alliance. Control of the ball during matches was also very important in this game. To assist our drivers in handling the ball, I wrote code in C++ using three sensors: an IR proximity sensor, a gear tooth sensor, and the drivetrain encoders.

Ball Collection and Detection

Using an IR proximity sensor behind the bumper, my code automatically spun the rollers until a ball was detected to have been collected over the bumper. Whenever the proximity sensor detected that the ball had slipped out of the robot, the rollers quickly spun the ball back in. This allowed us to always have control of the ball, so the driver could focus on controlling the rest of the robot.
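The retention logic reduces to a small per-cycle rule. A minimal sketch in C++, with hypothetical sensor and motor wrappers standing in for the real robot classes (which are not shown in this portfolio):

    // Hypothetical stand-ins for the actual sensor/motor classes.
    struct IRProximitySensor {
        bool BallPresent() const { return ballPresent; }
        bool ballPresent = false;
    };
    struct RollerMotor {
        void Set(double power) { output = power; }
        double output = 0.0;
    };

    void UpdateBallRetention(const IRProximitySensor& proximity,
                             RollerMotor& rollers) {
        // Spin the rollers whenever the sensor does not see the ball,
        // both for initial collection and to pull a slipping ball back in;
        // stop once the ball is secured.
        rollers.Set(proximity.BallPresent() ? 0.0 : 1.0);
    }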

Dribbling with Speed Control

In autonomous mode, we had to move two balls at the same time: one inside the robot, and one behind it on the ground. To maintain control of the ball on the ground without collecting it into the robot, my code read the drivetrain encoder values and set the collector roller motor power so that the surface speed of the rollers matched the ground speed of the robot.
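The speed matching is a simple feed-forward calculation. A sketch, with a hypothetical constant (the rollers' free surface speed at full power) in place of the real robot's value:

    #include <algorithm>

    // Match roller surface speed to the robot's ground speed so the
    // dribbled ball stays stationary relative to the robot.
    double RollerPowerForDribble(double robotGroundSpeed /* m/s, from encoders */) {
        const double kRollerMaxSurfaceSpeed = 4.0;  // m/s at full power, hypothetical
        double power = robotGroundSpeed / kRollerMaxSurfaceSpeed;
        return std::clamp(power, -1.0, 1.0);
    }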

Controlling Passing Speed Independently of Driving Speed

A gear tooth sensor and the drivetrain encoders were used to control the ground speed of the ball when passing. As with dribbling, the drivetrain encoders were used to set the roller motor power so that the robot's ground velocity offset the roller velocity. For increased accuracy, an additional gear tooth sensor on the rollers detected when the rollers were spinning at the desired speed by measuring the passing rate of the gear teeth. The collector arm was then lifted, leaving the ball at the desired ground speed independent of driving speed. This was especially spectacular when the robot dropped the ball so that it sat still on the ground even while the robot was driving at full speed.
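Extending the dribble sketch above to passing (again with hypothetical constants): the commanded roller surface speed is the desired ball ground speed minus the robot's ground speed, and the arm only lifts once the gear tooth sensor confirms the rollers are at speed.

    #include <algorithm>
    #include <cmath>

    const double kRollerMaxSurfaceSpeed = 4.0;  // m/s at full power, hypothetical

    // Offset the roller speed by the robot's speed so the ball leaves at
    // the commanded ground speed regardless of how fast we are driving.
    double RollerPowerForPass(double desiredBallSpeed, double robotGroundSpeed) {
        double surfaceSpeed = desiredBallSpeed - robotGroundSpeed;
        return std::clamp(surfaceSpeed / kRollerMaxSurfaceSpeed, -1.0, 1.0);
    }

    // Gate on the gear tooth sensor before lifting the collector arm.
    bool RollersAtSpeed(double measuredSurfaceSpeed, double targetSurfaceSpeed) {
        const double kTolerance = 0.1;  // m/s, hypothetical
        return std::fabs(measuredSurfaceSpeed - targetSurfaceSpeed) < kTolerance;
    }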

Page 5: Robotics Portfolio

Crank Geometry

Our 2014 robot had to launch a 2-foot-diameter game ball into a 7-foot-high goal, and we designed our robot to do so from 15 feet away. We used two 400-pound compression springs to store the required energy. To compress these springs, we used a two-stage crank rotated by a worm gearbox.

The second (gray) and third (yellow) linkages are pulled in the first stage (1 and 2), until the second linkage is caught by a peg (3). In the second stage, the rotation continues (4) until the linkages are over center and the shooter releases (5). Here is a video of the crank in action.

Page 6: Robotics Portfolio

I worked on calculating the geometry of the linkages of the crank. The geometry had to compress the springs the right amount while also minimizing the torque the springs exerted on the gearbox, so that we could use a lower reduction and reload faster.

Using CAD, I created a sketch of the linkages that also calculated the torque required at various points of the pull to overcome the spring's force at that compression. The torque increased as the springs were compressed further. The geometry sketch could be dragged around to show the torque at different points of the pull.

The first linkage's length is the diameter of the solid green circle. The second linkage is attached to that green circle, and the third linkage is attached to that, going up to the shooter arm. The images above show the crank in its second stage.

I then realized that the peak torque could be minimized by making the first linkage longer and the second linkage shorter while keeping the total length the same. Because torque is force times radius, the larger radius paired with the smaller spring force in the first stage, and the smaller radius paired with the larger spring force in the second stage, would make the torques of the two stages balance. This modification resulted in a 100 lb-in torque reduction for the gearbox, allowing us to reload 3 seconds faster.
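The balancing idea can be stated compactly (the symbols here are mine, not from the original CAD sketch). If stage 1 works at radius $r_1$ against the smaller spring force $F_1$, and stage 2 at radius $r_2$ against the larger force $F_2$, with $r_1 + r_2$ fixed by the total linkage length, then

\[
\tau_1 = F_1 r_1, \qquad \tau_2 = F_2 r_2,
\]

and the gearbox must overcome $\max(\tau_1, \tau_2)$. Lengthening $r_1$ raises $\tau_1$ while shortening $r_2$ lowers $\tau_2$, so the peak is minimized where $F_1 r_1 \approx F_2 r_2$.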

Page 7: Robotics Portfolio

Automatic Ball Tracking

As a side project, I worked on a vision program to allow the robot to automatically track the game ball and drive towards it to pick it up. I used a Kinect, which could give me depth images in addition to RGB images. This allowed me to easily find the location of the ball in 3D space.

This is the overall process of the program:

1. Convert Kinect depth image to 3D point cloud.

Using the OpenNI library, I captured both the RGB and depth images. I then used the library to convert the depth image into a 3D point cloud.

2. Use RANSAC to filter out the ground plane.

RANSAC (Random Sample Consensus) is an algorithm that finds the model containing the most inliers from a set of points. It works by iteratively selecting a small random subset of points, fitting a model to them, counting the points that are outliers to that model, and then repeating and keeping the model with the fewest outliers. I used this algorithm to find the ground plane, which was the most prominent plane in the point cloud. These points were removed from the cloud, leaving only the objects (see the plane-fitting sketch after this list).

Page 8: Robotics Portfolio

3. Isolate objects using 3D blob detection.

Using a blob detection algorithm that works in 3 dimensions, blobs of points were separated into distinct objects. One of these objects would be the ball.

4. Take the blobs with the highest percentage of blue and red pixels as the blue and red balls, respectively.

Matching the points in the point cloud with their respective pixels from the RGB image, the blobs with the highest percentage of blue and red pixels were taken as the blue and red balls, respectively, because the balls were solid red and blue.

5. Find the average coordinate of each blob.

The average coordinate of all the points in each blob was used as the location of each ball.

6. Convert coordinates from camera space to robot space using the ground plane.

The coordinates above were relative to the camera. I used matrices to convert the points from camera space to robot space, using the projection of the camera onto the ground plane as the new origin.

7. Drive towards the ball location.

The drivetrain position control loop used the ball location as the desired setpoint, and actively corrected itself every cycle to move towards the current location of the ball.
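As referenced in step 2, here is a minimal, self-contained sketch of RANSAC ground-plane removal. The point type, iteration count, and inlier distance are my own illustrative choices, not the original implementation:

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct Point3 { double x, y, z; };
    struct Plane { double a, b, c, d; };  // ax + by + cz + d = 0, unit normal

    // Distance from a point to a plane with a unit-length normal.
    static double Dist(const Plane& pl, const Point3& p) {
        return std::fabs(pl.a * p.x + pl.b * p.y + pl.c * p.z + pl.d);
    }

    // Returns the points NOT on the dominant plane, i.e., the objects.
    std::vector<Point3> RemoveGroundPlane(const std::vector<Point3>& cloud,
                                          int iterations = 200,
                                          double inlierDist = 0.02 /* m */) {
        if (cloud.size() < 3) return {};
        Plane best{0, 0, 1, 0};
        size_t bestInliers = 0;
        for (int i = 0; i < iterations; ++i) {
            // Pick three random points and fit a plane through them.
            const Point3& p = cloud[std::rand() % cloud.size()];
            const Point3& q = cloud[std::rand() % cloud.size()];
            const Point3& r = cloud[std::rand() % cloud.size()];
            double ux = q.x - p.x, uy = q.y - p.y, uz = q.z - p.z;
            double vx = r.x - p.x, vy = r.y - p.y, vz = r.z - p.z;
            double a = uy * vz - uz * vy;
            double b = uz * vx - ux * vz;
            double c = ux * vy - uy * vx;
            double n = std::sqrt(a * a + b * b + c * c);
            if (n < 1e-9) continue;  // degenerate (collinear) sample
            a /= n; b /= n; c /= n;
            Plane pl{a, b, c, -(a * p.x + b * p.y + c * p.z)};
            // Count inliers; keep the plane with the most (fewest outliers).
            size_t inliers = 0;
            for (const Point3& pt : cloud)
                if (Dist(pl, pt) < inlierDist) ++inliers;
            if (inliers > bestInliers) { bestInliers = inliers; best = pl; }
        }
        std::vector<Point3> objects;
        for (const Point3& pt : cloud)
            if (Dist(best, pt) >= inlierDist) objects.push_back(pt);
        return objects;
    }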

Page 9: Robotics Portfolio

Codebase Framework

In 2013, our software had many automated routines, such as the climb and autonomous. The codebase was badly structured for these routines, making their code complicated, unreadable, and thus hard to maintain. Addressing this problem in the 2014 C++ codebase, I created a framework that made automation structured and easy to maintain.

The framework was event-driven, meaning that each automation routine would be started and aborted by events, such as the pressing or releasing of a joystick button. Each automation routine was a subclass of the Automation class, and each event was a subclass of the Event class. During construction, automation classes could be linked to start and abort events, making it very easy to change how routines were started and aborted.

The Automation class had three main functions: StartAutomation, AbortAutomation, and Update. StartAutomation was called the cycle the start event fired, and AbortAutomation was called the cycle the abort event fired. The Update method did the main processing of the routine, including control of parts of the robot. This structure made adding and modifying automation routines easy and understandable.
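A skeletal reconstruction of that interface, inferred from the description above (everything beyond the three named functions is my own guess at the shape of the classes):

    // Base class for triggers such as joystick button presses/releases.
    class Event {
    public:
        virtual ~Event() = default;
        virtual bool Fired() = 0;  // polled each cycle by the Brain
    };

    // Base class for automation routines, linked to its events at construction.
    class Automation {
    public:
        Automation(Event* startEvent, Event* abortEvent)
            : startEvent_(startEvent), abortEvent_(abortEvent) {}
        virtual ~Automation() = default;

        virtual void StartAutomation() = 0;  // called the cycle the start event fires
        virtual void AbortAutomation() = 0;  // called the cycle the abort event fires
        virtual void Update() = 0;           // per-cycle processing and robot control

        Event* StartEvent() const { return startEvent_; }
        Event* AbortEvent() const { return abortEvent_; }

    private:
        Event* startEvent_;
        Event* abortEvent_;
    };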

Subclasses of the Automation and Event classes in the 2014 codebase

Page 10: Robotics Portfolio

The entire framework was controlled by the Brain class, which this flowchart describes.

Page 11: Robotics Portfolio

2013 Automated Climb Sequence

The 2013 FRC game involved climbing up a pyramid, and we designed a robot that could climb up two rungs to the second level. I wrote the complex software routine to automate the sequence.

The robot had a winch pawl that could be powered by a single motor to reel the line in and out, and it could also be powered by a power take-off (PTO) from the drivetrain to get full power when lifting the weight of the robot. The PTO was engaged by a servo motor. My code used a state machine involving a gear tooth sensor and the drivetrain encoders: the gear tooth sensor measured the distance travelled by the winch pawl motor, and the drivetrain encoders measured the distance travelled by the drivetrain. When the PTO was engaged, the winch pawl motor had to be run in sync with the drivetrain.

[CAD model of the 2013 robot with parts of the climber labeled: winch lines, upper hooks, arms, and lower hooks. The lower hooks were later redesigned, and the upper hooks were reversed.]

Page 12: Robotics Portfolio

This is the overall sequence of the routine, using a state machine (a skeleton of the states is sketched after this list):

1. Pneumatically actuate the lower hooks up, and simultaneously reel the line out a specific amount to prepare for later. Pause the routine to allow the driver to line up to the first rung.

2. When a button is pressed, lower the lower hooks to pull the robot up to the first level.

3. Reel the line out further to engage the upper hooks on the second rung.

4. When the driver has ensured the hooks are engaged and presses a button, release the lower hooks so the robot hangs by the upper hooks, ready to pull.

5. Engage the PTO and reel the line in with the drivetrain and winch pawl motor running in sync, pulling the robot up to the second level.
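The state skeleton referenced above might look like this; the state names and the driver-confirmation pauses are drawn from the list, while everything else is illustrative:

    // One state per step, plus the driver-confirmation pauses.
    enum class ClimbState {
        HooksUpAndPrepLine,   // step 1: hooks up, reel line out
        WaitForDriverLineUp,  // pause until the driver lines up and presses a button
        PullToFirstLevel,     // step 2: lower hooks pull the robot up
        EngageUpperHooks,     // step 3: reel out to the second rung
        WaitForDriverConfirm, // step 4: driver confirms hook engagement
        WinchToSecondLevel,   // step 5: PTO engaged, winch synced with drivetrain
        Done
    };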

With this automation, the entire routine can be completed in the last 20 seconds of a match, with only 3 button presses by the driver. Here is a video of this routine in action.

Page 13: Robotics Portfolio

Shooter Loading Routine

Our 2013 robot had to shoot discs into goals after picking them up off the ground, so our robot had a storage container holding four discs at a time. To load the discs from the storage into the flywheel shooter, we had a pneumatic pusher with a wedge at the end.

To automatically load discs into the shooter, I wrote a state machine involving a proximity sensor and the Hall effect sensors on the shooter flywheel. When the wedge moved back, the discs would jump up, so my code had to wait for the discs to fall back down before moving the wedge forward and pushing one disc into the shooter. The proximity sensor detected whether the discs were settled, and the Hall effect sensors measured the speed of the shooter.

This was the overall state machine (sketched in code after this list):

1. Move the pusher back, and wait for the proximity sensor to detect that the discs have jumped.

2. Wait for the discs to fall back down, triggering the proximity sensor.

3. Move the pusher forward, and wait for the shooter flywheel speed to drop, indicating that the disc has entered the shooter.

4. Repeat the process until all four discs in the storage are fired.
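An illustrative skeleton of that state machine; the sensor inputs and the 10% speed-drop threshold are hypothetical stand-ins for the real values:

    enum class FireState { PusherBack, WaitSettle, PushDisc, Done };

    void UpdateAutoFire(FireState& state, int& discsFired,
                        bool discsSettled,     // proximity sensor
                        double flywheelSpeed,  // from Hall effect sensors
                        double targetSpeed) {
        switch (state) {
        case FireState::PusherBack:
            // Pusher retracts; the discs jump, so the sensor reads unsettled.
            if (!discsSettled) state = FireState::WaitSettle;
            break;
        case FireState::WaitSettle:
            // Wait for the discs to fall back down.
            if (discsSettled) state = FireState::PushDisc;
            break;
        case FireState::PushDisc:
            // A flywheel speed drop means a disc entered the shooter.
            if (flywheelSpeed < 0.9 * targetSpeed) {
                state = (++discsFired >= 4) ? FireState::Done
                                            : FireState::PusherBack;
            }
            break;
        case FireState::Done:
            break;
        }
    }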

This auto-fire routine allowed our robot to fire all four discs in approximately 2.5 seconds. Here is a detailed video showing this routine.

Page 14: Robotics Portfolio

Goal Detection

In the 2013 FRC game, the goals had reflective tape around them to allow teams to write vision code to detect the location of the goals. I wrote computer vision code in C++ to calculate the location of the goal and automatically align the robot to it.

The camera I used was a PS3 Eye camera with the infrared filter removed. I attached a ring of infrared LED lights around it to light up the reflective targets. This ensured that the dominant feature in the image would be the reflection from the goal.

This is the overall process of the program.

To calculate the location of the goal in the image, I used the phase correlation algorithm. Using a template image of the goal, the algorithm finds the location of the closest match of that template in the input image. It runs the Fast Fourier Transform on the template and input images and calculates the phase correlation between them. When the Inverse Fourier Transform is run on the phase-correlated image, the location of the brightest point is the location of the goal.
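A condensed sketch of that computation using OpenCV; this is my reconstruction of the described steps, not the code that ran on the robot:

    #include <vector>
    #include <opencv2/core.hpp>

    // Find where a (smaller) template best matches the input image by
    // phase correlation: FFT both, form the normalized cross-power
    // spectrum, inverse-FFT, and take the brightest point.
    cv::Point LocateGoal(const cv::Mat& inputGray, const cv::Mat& templGray) {
        cv::Mat inputF, templF = cv::Mat::zeros(inputGray.size(), CV_32F);
        inputGray.convertTo(inputF, CV_32F);
        // Zero-pad the template to the input size so the spectra match.
        cv::Mat roi = templF(cv::Rect(0, 0, templGray.cols, templGray.rows));
        templGray.convertTo(roi, CV_32F);

        cv::Mat A, B, cross;
        cv::dft(inputF, A, cv::DFT_COMPLEX_OUTPUT);
        cv::dft(templF, B, cv::DFT_COMPLEX_OUTPUT);
        cv::mulSpectrums(A, B, cross, 0, true);  // A * conj(B)

        // Keep only phase: divide each complex element by its magnitude.
        std::vector<cv::Mat> planes;
        cv::split(cross, planes);
        cv::Mat mag;
        cv::magnitude(planes[0], planes[1], mag);
        mag += 1e-9;  // avoid division by zero
        cv::divide(planes[0], mag, planes[0]);
        cv::divide(planes[1], mag, planes[1]);
        cv::merge(planes, cross);

        // Inverse FFT; the brightest point is the template's offset.
        cv::Mat response;
        cv::dft(cross, response,
                cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
        cv::Point peak;
        cv::minMaxLoc(response, nullptr, nullptr, nullptr, &peak);
        return peak;  // offset of the best template match in the input image
    }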

[Pipeline diagram: the input image and template image are transformed and phase-correlated, then the Inverse Fourier Transform produces the final result.]

Camera: The image is acquired from a PS3 Eye camera with the IR filter removed.

External Processor: The image is sent to a Foxconn computer, which processes it to find the location of the target.

Robot Processor: The location of the goal is sent to the main processor, which uses it to position the robot at the correct location.

Page 15: Robotics Portfolio

2012 Key Detection

One of the features in the 2012 FRC game was a “key” that robots could line up to for shooting basketballs. One of my first software projects on the team was to write a vision program that automatically lined the robot up to the edge of the key.

Overall Process for Positioning on the Key

Camera: The image is acquired from a webcam.

BeagleBoard: The image is sent to a processor called the BeagleBoard, and the image is processed (see below) to get information about the key.

Robot Processor: The processed data is sent to the main processor to control the robot position and stop at the edge of the key.

Detecting the Key

Input Image & Histogram

- The red channel peaks at the same value for both white and red pixels, so thresholding red values alone would not work.

- The values of the other channels are lower for red pixels than for white pixels, so a difference algorithm (thresholding the differences between the red channel and the other channels) accepts only red pixels above a certain threshold.

Thresholded Image & Histogram

- The same process is also applied to the blue channel so that we can detect both keys.

- The red and blue pixels that pass the thresholds are added to a counter, which is converted to a percentage of the entire image.

- The percentage is used to determine whether the robot is on the edge of the key.

[Images: the original and thresholded views, with the red, white, and red-and-white pixel regions labeled.]

Page 16: Robotics Portfolio

Team Website

As the webmaster of the team in my sophomore year, I decided to rewrite the team website at http://lynbrookrobotics.com from scratch. Instead of using WordPress as it previously did, I wrote a custom content management system that could also manage our membership list and organize rides for team events.

I designed the theme for our website using HTML and CSS, and wrote the back-end code using PHP and MySQL. I also designed many of the banner images on the home page, and rewrote a lot of outdated content throughout the website.

Our new team website design and content were commended by many other teams.

Page 17: Robotics Portfolio

Additional Sources

Papers I wrote for some of my projects:

Detecting the Key (2012) - http://lynbrookrobotics.com/resourcefiles/whitepages/2012/Key%20Detection.pdf

Lining Up to the Goal (2013) - http://lynbrookrobotics.com/resourcefiles/whitepages/2013/AutoAimDocument.pdf

Manipulating the Ball (2014) - http://lynbrookrobotics.com/resourcefiles/whitepages/2014/BallManipulation.pdf

Hexagonal Drive Base and Bumpers (2014) - http://lynbrookrobotics.com/resourcefiles/whitepages/2014/HexagonalDriveBase.pdf

Source code for some of my projects:

Automatic Ball Tracking (2014) - https://dl.dropboxusercontent.com/u/76375046/BallTracking.rar

Goal Detection (2013) - https://dl.dropboxusercontent.com/u/76375046/AutoAim.rar

Videos of my projects in action on our robots:

Hexagonal Drive Frame and Bumpers (2014) - http://youtu.be/eKcOGrCJX24

Automatic Ball Catch (2014) - http://youtu.be/R6odjjIuDBU

Dropping Ball Still on Ground (2014) - http://youtu.be/3vdnbtxrBGc

Automated Climb Sequence (2013) - http://youtu.be/23uxOb8Iwko

Autonomous Mode Routine (2013) - http://youtu.be/3615Vo_b2Nc

Shooter Loading Routine (2013) - http://youtu.be/o-IbhKo36-s

2011 Robot Arm Control Loop (2012) - http://youtu.be/w30u4442c20

Promotional videos I made for our robots:

2012 Season Highlights - http://youtu.be/XJYD4rgPWGU

2012 CalGames Offseason Competition Highlights - http://youtu.be/ws2bLb_TVbY

2013 Robot “Reveal” - http://youtu.be/jmdET56tukM

2013 Season Highlights - http://youtu.be/w79l4VhjrD0

2013 Offseason Competition Highlights - http://youtu.be/yCBjZ0s0kAo

2014 Robot “Reveal” - http://youtu.be/92IbHU0Z76I

2014 Season Highlights - http://youtu.be/6LP_wzg0UKI

2014 Offseason Competition Highlights - http://youtu.be/zd-kX49NzuM

Team Website (created in 2012): http://lynbrookrobotics.com