Building a Learning Lens: My Experience at ASU + GSV and the Future of Learning

Weird, but effective: How my AI-enhanced habits are helping me learn better in the real world

Bethany Crystal

For the past three days, I've been attending The AI Show at ASU + GSV. It's my first time attending this major conference since I've been completely "AI-pilled," so to speak.

I noticed a few...unusual...departures from my previous conference-going behavior. Specifically:

Weird Ways I Used AI at ASU + GSV

Here's how I used AI to make sense of the conference and the world around me:

  1. Took photos of all booth displays and ran them through my new app, MuseKat, for real-time voice summaries of what I was seeing

  2. Took a photo of someone's resume, ran it through a custom GPT trained on my company to figure out how we might scope a project together (with him standing right there)

  3. Responded to a text asking, "What takeaways did you have from today?" with a screenshot of Miko's takeaways from my new app MuseKat

  4. Used voice dictation as a second brain throughout the entire day, sharing snippets in bursts, then auto-summarizing all of my next steps in one go

  5. Met someone at a bar who asked what I did; I handed him my ChatGPT instance and asked my AI to introduce me to him on my behalf

  6. Asked about who is building credentialing systems to assess the skill level of AI agents (vs. human credentials)

  7. Used AI to prioritize all conference takeaways, to-do lists, and even write the email follow-ups for me

Since I started building MuseKat, an app that drives curiosity-based learning about the world around you through customized audio stories, I've noticed that my habit of using AI to make sense of the real world has grown substantially.

Where I once would reach for a pen, I reach for an AI note-taker. Where I used to keep a business card, I take a photo (then run it through the AI). Where I used to stow away in the corner and feverishly write notes from a meeting, I talk to my AI.

While I initially built MuseKat to give parents a way to translate real-world content from museums into kid-relatable audio snippets, I found myself using it to contextualize my own adult, IRL learning too, including at the expo.

For instance, rather than just take notes about each booth that I visited, I took a photo with MuseKat and listened to an audio summary. Then I summarized my visit with these highlights.

I built MuseKat to help parents help their kids make sense of the world through AI-powered audio stories. It's been fun to see that it's useful for grown-ups, too.

How AI Offers a New Learning Lens

Me and Miko enjoying the ASU + GSV conference together (image generated via ChatGPT)

My use of AI isn't just happening behind my computer screen anymore. It's happening through my phone. Through my earbuds. And really, through any piece of technology that allows it.

In fact, the hardware behind the way I'm using AI isn't quite keeping up with my desire to be AI-first. Which makes me think there will soon be an emergent wave of new hardware that's AI-first. Stuff that's not just the phone. Stuff that's screen-light or even screen-free.

At the conference, I tried on the Meta VR headset and the AI-enabled Meta Ray-Ban sunglasses for the first time. The glasses are very cool. You can take photos of the world around you, ask questions, and even listen to music without wearing buds in your ears, which means you stay more cognitively present to the world around you than, say, when wearing noise-canceling headphones.

(All of that at a $300 price point seems like a pretty reasonable way to bring the world of knowledge to your brain, in real time.)

What I like about the Ray-Ban glasses is that they essentially add a filtered view to the way you make sense of the world, in real time. This is the same concept behind MuseKat. With MuseKat, I'm starting with a kid-friendly learning lens, giving families and caretakers the tools to help their kids make sense of the world around them, at their own pace and on their own level.

Right now, it might feel counterintuitive for people to use a device to reinterpret the world around them. But I believe this is a habit we're going to see more of in the near future.

My guess is that not a lot of parents would be willing to spend $300 for AI-enabled glasses for a five-year-old who will likely break them anyway. So I'm hoping a good old-fashioned smartphone app is a good enough start. At least for this week...

Beta Testers Wanted! If you'd like early access to my new iPhone app, MuseKat (which is now in TestFlight), I'd love to get your feedback. You can sign up for the waitlist here.

