The next step this week was to start developing the first prototype of the ‘Scope’ app. We called this Operation Robot, and Aravindh (his blog is http://thefieryeye.wordpress.com/) and I decided to compartmentalize the tasks to prevent syncing errors and merge issues. My task was to get the camera working within the app, take a picture, and OCR it, keeping with the Android design principles. It is an immediate upgrade from the simple app created last week, except that now we have to focus on actually making it the final ‘Scope’ app to be demoed at CA1 in a few weeks.
To create the camera feature, I tapped into the camera capabilities Android already exposes: I fired an intent with the MediaStore.ACTION_IMAGE_CAPTURE action to launch the device's camera app. After capturing the image, I fed it to the Tesseract engine and displayed the recognized text as a toast.
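A minimal sketch of that flow, assuming the tess-two wrapper around Tesseract (its `TessBaseAPI` class) and a hypothetical `TESSDATA_PATH` constant pointing at the `tessdata/` folder on the device:

```java
private static final int REQUEST_IMAGE_CAPTURE = 1;

// Launch the device's camera app via the ACTION_IMAGE_CAPTURE intent
private void dispatchTakePictureIntent() {
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        // Without EXTRA_OUTPUT, the camera app returns a small
        // thumbnail bitmap in the "data" extra
        Bitmap photo = (Bitmap) data.getExtras().get("data");

        // Run Tesseract (tess-two) on the captured bitmap
        TessBaseAPI tess = new TessBaseAPI();
        tess.init(TESSDATA_PATH, "eng"); // TESSDATA_PATH is an assumption
        tess.setImage(photo);
        String result = tess.getUTF8Text();
        tess.end();

        // Show the recognized text as a toast
        Toast.makeText(this, result, Toast.LENGTH_LONG).show();
    }
}
```

One caveat with this shape: the thumbnail returned in the `"data"` extra is low-resolution, so for real OCR accuracy you'd typically pass an `EXTRA_OUTPUT` URI and load the full-size image from storage instead.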
The harder part was understanding the Android Action Bar (http://developer.android.com/guide/topics/ui/actionbar.html) and Fragments (http://developer.android.com/guide/components/fragments.html). The point of using them was to keep the number of changing views in the app minimal and easy to work with. In the end I figured out the different fragments and how to use them. From there it was easy to get the camera working! Good end to sprint 1!
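The fragment approach can be sketched roughly as follows; the layout resource `R.layout.fragment_camera`, the container id `R.id.container`, and the class name `CameraFragment` are all assumptions for illustration:

```java
// A fragment owns just its own slice of UI...
public class CameraFragment extends Fragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // Inflate this fragment's layout into the activity's container
        return inflater.inflate(R.layout.fragment_camera, container, false);
    }
}

// ...so the activity can swap screens by replacing fragments in one
// container, instead of juggling several full activities/views:
getFragmentManager().beginTransaction()
        .replace(R.id.container, new CameraFragment())
        .commit();
```

This is why fragments keep the number of changing views small: the activity holds a single container, and each screen is just a fragment swapped in and out of it.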