Your Turn!
Now it’s your turn. Download this document for a recap of the blocks we covered in this lesson and some hands-on exercises for you to explore.
What You Have Learned in This Lesson
In this lesson you learned how to build and apply the "$1 Unistroke Recognizer", an algorithm that recognizes single-stroke gestures, i.e. gestures that can be drawn in one line without lifting the pen. To recognize a gesture, it uses an instance-based nearest-neighbor approach: it compares the gesture with a list of examples and reports the example for which the sum of the Euclidean distances between corresponding points is lowest.
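The recognition step described above can be sketched in a few lines of Python rather than Snap! blocks. The function names and the point format (a list of (x, y) tuples) are our own illustration, not part of the original implementation, and the sketch assumes all paths have already been resampled to the same number of points:

```python
import math

def path_distance(path_a, path_b):
    # Sum of Euclidean distances between corresponding points.
    # Assumes both paths were resampled to the same length.
    return sum(math.dist(a, b) for a, b in zip(path_a, path_b))

def recognize(gesture, examples):
    # examples: list of (label, path) pairs.
    # Report the example whose summed distance to the gesture is lowest.
    return min(examples, key=lambda ex: path_distance(gesture, ex[1]))[0]
```

This is exactly the instance-based nearest-neighbor idea: no model is trained; the examples themselves are the "knowledge" of the recognizer.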

$1 was developed for rapid prototyping of gesture-based user interfaces and was originally published in 2007 by researchers from the University of Washington and Microsoft Research.
Read the original publication here:
https://faculty.washington.edu/wobbrock/pubs/uist-07.01.pdf
Find a version to play with here:
http://depts.washington.edu/acelab/proj/dollar/index.html
Or take a look at our teaching resources for the $1 gesture recognizer:
https://d.dam.sap.com/a/tjGdJ9K?rc=10 (downloads a zip file with the PDF versions of all worksheets)
Normalizing Data
Drawing the same gesture several times – in this case a heart – will result in wildly different values of the sketch variable. Depending on how fast you draw the gesture, the length of the sketch variable can vary by several hundred points.

To find the closest gesture in the examples list, you are comparing the gestures point by point. Therefore, you need to make sure that they all have the same number of points in their path. This is a form of normalization – a process often used in data science to make different data comparable.
We achieved this with the resample block, which normalizes the gesture in the first input slot to the number of points given in the second input slot, distributing those points evenly across the whole path.
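In textual form, the resampling idea looks roughly like this Python sketch. The helper names are ours, not the block names from the lesson; the approach follows the resampling strategy of the $1 recognizer – walk along the path and emit a point every time a fixed interval of path length has been covered:

```python
import math

def path_length(points):
    # Total length of the stroke, segment by segment.
    return sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))

def resample(points, n):
    # Redistribute n points evenly along the path.
    points = list(points)
    interval = path_length(points) / (n - 1)
    new_points = [points[0]]
    distance = 0.0
    i = 1
    while i < len(points):
        d = math.dist(points[i - 1], points[i])
        if distance + d >= interval:
            # Interpolate a new point at the exact interval distance.
            t = (interval - distance) / d
            qx = points[i - 1][0] + t * (points[i][0] - points[i - 1][0])
            qy = points[i - 1][1] + t * (points[i][1] - points[i - 1][1])
            new_points.append((qx, qy))
            points.insert(i, (qx, qy))  # the new point becomes the next segment start
            distance = 0.0
        else:
            distance += d
        i += 1
    if len(new_points) < n:
        # Floating-point rounding can drop the final point; add it back.
        new_points.append(points[-1])
    return new_points
```

After resampling, two strokes of the same shape – one drawn slowly with hundreds of points, one drawn quickly with a handful – end up with exactly the same number of evenly spaced points and can be compared point by point.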

The original $1 Unistroke Recognizer uses not only resample but several more normalizing functions – though we found that resample is the most important one.
The translation function makes sure that all gestures have the same center. The centroid of each path is moved to the same position when normalizing the gesture. That ensures that paths which are drawn at different positions of the stage are still recognized as the same gesture.
The rotation function calculates an indicative angle – the angle between the centroid and the first point of the path – and rotates the path by that indicative angle, so that all gestures share a common starting orientation.
The scale function makes sure that all gestures have a similar size by calculating the bottom-left and top-right points of the gesture’s bounding box and scaling the path to a common reference size. This makes sure that a very small heart and a very large heart are still recognized as the same gesture.
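The three extra normalization steps can be sketched in Python as follows. The function names and the reference size of 250 are our own assumptions for illustration; degenerate cases such as a zero-width path are not handled:

```python
import math

def centroid(points):
    # Average of all x and all y coordinates.
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def translate_to(points, target=(0.0, 0.0)):
    # Move the path so its centroid lies at a common position.
    cx, cy = centroid(points)
    return [(x - cx + target[0], y - cy + target[1]) for x, y in points]

def rotate_to_zero(points):
    # Rotate around the centroid so the indicative angle
    # (centroid -> first point) becomes 0.
    cx, cy = centroid(points)
    angle = math.atan2(points[0][1] - cy, points[0][0] - cx)
    cos_a, sin_a = math.cos(-angle), math.sin(-angle)
    return [((x - cx) * cos_a - (y - cy) * sin_a + cx,
             (x - cx) * sin_a + (y - cy) * cos_a + cy) for x, y in points]

def scale_to_square(points, size=250.0):
    # Stretch the bounding box to a fixed reference size.
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    return [(x * size / width, y * size / height) for x, y in points]
```

Applied together with resampling, these steps mean that two hearts drawn at different positions, orientations, speeds, and sizes all end up as near-identical point lists before the distance comparison.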
You can find a version of the $1 Unistroke Recognizer that contains all normalizing functions in the exercises.
Bias in Data Science and AI
The term "bias" in data science or artificial intelligence describes systematic errors in data collection, data processing, model training, or the interpretation of results that lead to inaccurate and, in some cases, discriminatory behavior.
In this lesson’s example, Jens couldn’t use the gesture recognizer that Jadga built, because she only trained it on her own versions of the heart gesture. Jens draws the heart shape in a different way, and since his variation wasn’t included in the training data, the system failed to recognize it correctly.
Bias often arises from an underrepresentation of certain groups in the training data. As a result, the models don’t learn to handle them properly.
But even if the data is diverse and representative, it is not always objective or true. Data can encode societal patterns that reflect correlation rather than causation.
Finally, stereotypes and prejudices in the developers of AI systems can be reinforced through those systems.
Tackling the problem of bias in AI systems therefore requires multifaceted strategies: more representative training data, diverse teams working on AI systems, thorough testing with diverse user groups, and dedicated bias monitoring.
If you’re working with or building an AI system, make sure you never lose human oversight and keep in mind that AI systems and the data they are trained on aren’t always right.