Thinkie 3 for thinking aloud

Working with Thinkie 3



Thinking aloud

To apply the app, it is good to understand the thinking-aloud methodology. Some starting references:

Thinkie basics

Thinkie supports data capture for thinking aloud with people doing a task at their normal place: the iPhone captures the speech, the iPad the video plus the text/speech handling.

Thinking aloud reveals what a person is thinking while performing a task such as inspecting a device or a user interface, or interpreting a poem or an election poster. Usability engineering is a main application area for thinking aloud.

Thinking-aloud protocols are taken in the familiar environment of the test subject, where everything needed is at hand. The researcher travels there, bringing their own location-independent mobile equipment.

Thinkie is open to many languages. The speech recognition defaults to the system language of the device; you can change it if needed.
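As a rough illustration of how an explicit recognition language can be chosen on iOS, here is a minimal sketch using Apple's Speech framework; the locale identifier and the function name are examples, not Thinkie's actual code:

    import Speech

    // Minimal sketch: create a recognizer for an explicit language instead of
    // relying on the device's system language. "de-DE" is only an example.
    func makeRecognizer(languageCode: String = "de-DE") -> SFSpeechRecognizer? {
        guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: languageCode)),
              recognizer.isAvailable else {
            return nil   // language not supported, or recognition currently unavailable
        }
        return recognizer
    }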

Often it is useful to have a video alongside the speech recording on the phone. It shows the environment and the test person's interaction with it. Such a video can be taken on the iPad.

Thinkie supports both manual transcription and speech recognition for obtaining written transcripts of the spoken verbal protocol. The resulting text is interpreted and discussed, often by test person and researcher together.
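To give an idea of how a recorded protocol can be turned into text on iOS, here is a minimal sketch using Apple's Speech framework; it assumes speech-recognition permission has already been granted and is not Thinkie's actual implementation:

    import Speech

    // Minimal sketch: request a transcript of a recorded audio file.
    // Assumes SFSpeechRecognizer authorization has been obtained beforehand.
    func transcribe(fileURL: URL, locale: Locale = .current,
                    completion: @escaping (String?) -> Void) {
        guard let recognizer = SFSpeechRecognizer(locale: locale) else {
            completion(nil)
            return
        }
        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        recognizer.recognitionTask(with: request) { result, error in
            guard let result = result, error == nil else {
                completion(nil)
                return
            }
            if result.isFinal {   // deliver only the final transcript
                completion(result.bestTranscription.formattedString)
            }
        }
    }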

All files are stored on iCloud in the researcher's private data container. They can be exported for use outside Thinkie.

How to use Thinkie

Most of Thinkie is easy to use. Handling the speech recognition, however, needs an explicit explanation. You can choose between two storybook-like instructions, one in English and one in German.

For the English explanation see ThinkingAloud for Thinkie 3

The German version is at LautesDenken mit Thinkie 3

Privacy handling is described in the Privacy declaration of NoApps

Thinkie history

The Thinkie app has quite a history of its own. Its prehistory goes back to the late twentieth century. At that time, I was doing thinking-aloud studies in Germany and the US in order to find out which cognitive moves happen in the mind of a summarizer.

Earlier, I had discovered stable cognitive actions in my own summarizing performance. Most probably, other summarizers would share some of them and possess others, possibly better ones.

My equipment included a laptop, a cassette recorder, and a portable printer. The institutes that hosted me helped with a dictaphone so that I could transcribe the verbal protocols of the summarizers. The transcripts ended up in file folders with many pages pasted over with detail slips describing the thinking steps taken by agents executing mental operations such as reformulation, generalization, skipping, and so on.

This work was published and implemented on a small scale. Meanwhile, new options for mobile equipment arrived that researchers might use during field work, equipment that might fit in a lady's handbag.

A first implementation drawing on my experience with thinking aloud was Thinkie 1.0 of 2013.

Thinkie 1 of 2013

The first Thinkie was essentially a feasibility study. A description can be found in Thinkie: Lautes Denken mit Spracherkennung.

Thinkie 1.0 ran on iPhone and iPad, with iCloud storage, and supported data capture of audio and video. Initial transcribing and reworking of the verbal protocols included Siri dictation.

All concepts of Thinkie 1.0 survive until today, but the implementation has adapted to newer iOS tools. Along the way, Thinkie has moved towards better usability.

Thinkie 2 of 2014/15

Thinkie 2.0 advanced towards real use. The user interface was greatly improved, and the app was reconstructed on the then-current iOS. The code was now written in Swift instead of C.

Thinkie 2.0 remained a research and development option. Nevertheless, it was downloaded 3,700 times, with peaks in spring and summer 2017, remarkably late after the first downloads in April 2015.


Thinkie 3 of 2020/21



The App Store removed version 2 by the end of 2018. That was fine, and it became the starting signal for the next reworking of Thinkie. The direction remained the same: keep up with the current state of iOS, improve the user experience, approach real usability.

One might expect that producing the third version of an app is a routine affair. Quite the contrary when it came to reworking Thinkie.

Most of the time was spent trying to find technical solutions that would be convenient for users. Many attempts remained just attempts. Sometimes there was a design that would have fit for use but, alas, was not implementable with my means. Or, conversely, wrapping an attractive technical structure so that it might appear natural on the user interface forced a redesign of the interface, and often enough I had to discard and rewrite considerable passages of code.

Coming up with an operational speech recognition was a main endeavor. By now, the spoken verbal protocol can be recorded non-stop, without limits on duration. Users submit it to recognition piece by piece, in meaningful units as far as possible.

Thinkie's recognition adapts to the main language of the device. In addition, it is accompanied by a mechanical segmentation that cuts the audio file into sequences of a given length for manual transcription. This helps to organize transcription when speech recognition is not applicable, or to complement it for checking and correction.
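As an illustration of how such fixed-length segmentation could be done on iOS, here is a minimal sketch using AVFoundation; the segment length, file naming, and function name are assumptions, not Thinkie's actual code:

    import AVFoundation

    // Minimal sketch: cut an audio file into consecutive fixed-length segments
    // (e.g. 30 seconds each) for manual transcription. Error handling is omitted.
    func exportSegments(of sourceURL: URL, segmentLength: Double = 30,
                        to outputDirectory: URL) {
        let asset = AVURLAsset(url: sourceURL)
        let total = CMTimeGetSeconds(asset.duration)
        var start = 0.0
        var index = 0
        while start < total {
            let length = min(segmentLength, total - start)
            let range = CMTimeRange(start: CMTime(seconds: start, preferredTimescale: 600),
                                    duration: CMTime(seconds: length, preferredTimescale: 600))
            guard let export = AVAssetExportSession(asset: asset,
                                                    presetName: AVAssetExportPresetAppleM4A)
            else { return }
            export.outputFileType = .m4a
            export.outputURL = outputDirectory.appendingPathComponent("segment-\(index).m4a")
            export.timeRange = range
            export.exportAsynchronously { }   // completion handling omitted in this sketch
            start += segmentLength
            index += 1
        }
    }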

Next, Thinkie results must be able to move to other tools, so I added exports of all data files.
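A minimal sketch of one way such an export could be handed to other apps on iOS, via the standard share sheet; this is an illustration only, not Thinkie's actual export code:

    import UIKit

    // Minimal sketch: offer a data file (transcript, audio, video) to other apps
    // through the system share sheet.
    func export(fileURL: URL, from viewController: UIViewController) {
        let activity = UIActivityViewController(activityItems: [fileURL],
                                                applicationActivities: nil)
        viewController.present(activity, animated: true)
    }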

Thinkie data must be available from anywhere over the web. All data is therefore stored in the private iCloud container of the device owner, in one common record structure.
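To illustrate what such a record structure might look like with CloudKit, here is a minimal sketch saving one recording into the owner's private database; the record type and field names are hypothetical, not Thinkie's actual schema:

    import CloudKit

    // Minimal sketch: store a recording's metadata and audio as one record
    // in the owner's private iCloud database. "Recording", "title", and "audio"
    // are hypothetical names.
    func saveRecording(title: String, audioFileURL: URL) {
        let record = CKRecord(recordType: "Recording")
        record["title"] = title as NSString
        record["audio"] = CKAsset(fileURL: audioFileURL)
        CKContainer.default().privateCloudDatabase.save(record) { _, error in
            if let error = error {
                print("iCloud save failed: \(error)")
            }
        }
    }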

Handling Thinkie turned out to be more demanding than before, requiring more extensive user instructions, too much to put into the app itself. They were placed on my web page.

The user interface was set up from scratch. While testing it in a user role, I found and corrected several bugs. User complaints are welcome to further improve the app and its user experience!