P.S. Lab Notes are written for and organized by persona types – we wanted to sort our content by the way people think rather than by topic, because most topics benefit product builders of all backgrounds. This approach lets readers hone in on what suits them, e.g. perspective, lessons learned, or tangible actions to take.
I consume a lot of content, especially in preparation for our Pipette issues, and the process of learning itself has been on my mind recently. Specifically, how we prefer to learn, the types of content we consume, and content transformations – ways to convert that content from one format into another.
Before writing this article, I would have described myself as a visual learner. I’m constantly drawn to charts and diagrams when learning a new concept, and if they’re not available, I tend to make them myself. But I wanted to make sure (for science?!), so I took an online quiz. It scores learning preference (not aptitude!) across 4 categories:
Have a minute to spare? Take the quiz from VARK Learn and check out their page on how to understand the results. Some great bits about active learning in there, along with other exploration points. Once you’re done, let us know what you got via our feedback form!
I was pretty surprised to see that I scored as a “multimodal” learner (2+ learning styles), with a high preference for everything but auditory (aural). But according to the research from VARK, over 65% of learners prefer 2+ modes.
On the visual side, I tend to recall concepts especially well if there is a graph, diagram, chart, or map associated with it. Something about the spatial aspect of them tends to really burn the idea into my brain, and I can remember them years later.
The human relationship to spatial learning is pretty interesting too – in Joshua Foer’s Moonwalking with Einstein, he explores a technique called “memory palaces”, where people construct complex mental spaces to store incredible amounts of information. He also points to a study of London taxi drivers showing that geospatial knowledge can physically change our brain’s structure. The hippocampus (responsible for spatial navigation) was 7% larger in cabbies compared to the norm. This difference is attributed to a test they must pass called The Knowledge, which covers 25,000 streets and their points of interest.
You can tell that I get pretty amped about visual content. On the flip side, listening is probably the mode through which I absorb the least (hence my low score there). But despite my learning preference, listening has some specific advantages over other learning modes – for example, it leaves your hands free to do chores or go on a walk while you learn.
What about the rest of the learning styles (and associated content types)? Let’s look at their properties and some of the benefits and constraints they have.
As we covered, each content type has unique properties and could be less suited to you in the moment or in general. For example:
Can you think of other situations where you’ve wanted material in a different format? Let us know using the feedback form!
You may find yourself interacting with material that isn’t in your preferred learning style. Fortunately, you can often search the web for the same material in a different format. And regardless of your preferences, it can be interesting to take something you already have and look at it differently. Enter content transformations.
You may already use some common content transformations, such as listening to audiobooks in place of books, or reading transcripts in place of podcasts and videos (YouTube videos have searchable transcripts).
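Part of what makes transcripts so handy is that plain text is trivially searchable. As an illustration (not tied to any particular platform), here’s a minimal Python sketch that searches a transcript in the common SRT subtitle format and returns the timestamp of each matching cue; the sample cues below are invented:

```python
import re

# A tiny sample transcript in SRT format (cues and timestamps are made up).
SRT = """\
1
00:00:01,000 --> 00:00:04,000
Welcome to the show, today we talk about memory palaces.

2
00:00:04,500 --> 00:00:09,000
Spatial techniques can help you retain huge amounts of information.
"""

def search_transcript(srt_text, phrase):
    """Return (start_timestamp, cue_text) for each cue containing the phrase."""
    hits = []
    # SRT cues are separated by blank lines: index, timing line, then text.
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        start = lines[1].split(" --> ")[0]
        text = " ".join(lines[2:])
        if phrase.lower() in text.lower():
            hits.append((start, text))
    return hits

print(search_transcript(SRT, "memory palaces"))
# → [('00:00:01,000', 'Welcome to the show, today we talk about memory palaces.')]
```

This is essentially what a player’s transcript search does for you: jump from a phrase to a timestamp you can seek to.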
However, there are other techniques I’ve found useful but less widely known, like leveraging accessibility tools, AI, and other apps. Let’s explore a few.
Less obvious transformations:
Using AI:
Let’s expand on some of these a bit more.
Depending on the platform you use, the instructions will vary slightly, but the gist is the same: take any image containing text and use an app or website to extract that text. Under the hood, this uses OCR (optical character recognition).
If you want to use a website, OnlineOCR.net is spartan but gets the job done – I’ve used it plenty of times over the years. These days there are also native integrations you can use without opening a browser.
For example, on Macs, you can use Preview to select text from an image.
Prefer to use your phone? Here are the instructions for native phone apps: Google Lens (Android), and Live Text (iOS).
I’ll show how I do this on Android and provide some instructions for other platforms too.
For iOS, see these instructions.
Enjoyed the article? It’s part of our Lab Notes – a compilation of long-term learnings and emerging thoughts from our journey in the tech industry. Learn more or check out some additional lab notes below.
It includes our latest articles and thoughts. Sent roughly 1x a month.