Playful Prototyping with Machine Learning

The Henry Ford Museum of American Innovation wanted to give guests a peek into the sometimes messy, always interesting mind of a museum curator. We built them The Connections Table: a powerfully playful experience that juxtaposes surprising curator-created concepts with AI-driven discovery. Along the way, we learned a lot about the current state, and future promise, of AI.

Initial Promise

A theme that quickly emerged from our earliest workshops with the museum was interconnectedness. Chief Curator Marc Greuther gave us a tour in which he connected a lathe from the Industrial Revolution, a trowel, and a smartphone. That type of connection, especially among seemingly disparate objects, is one of the things that makes The Henry Ford really special. Our team set out to take visitors on a similar journey through the museum’s robust digital collection, and proposed a digital table, where multiple users could interact simultaneously, to make that journey collaborative.

An early concept rendering with our proposed table

We had a provocation, a design hypothesis, and some initial spatial parameters. With these we also had a challenge: what happens when you scale from three connected objects to thirty thousand? Would it even be possible to make that jump? To try to answer those questions, we turned to AI.

Playing with AI

Early in our design process we love to use prototypes to play and chase ideas. Our first AI experiments were what we call “example mashups”: quick remixes of existing code samples and in-house experiments with project-specific content and context. Already, we found them really promising. AI-based image sorting made beautiful and shockingly logical visual connections. Once we made that data browsable, we found ourselves spending quite a lot of time cruising around what we created.

Our first AI experiment for The Henry Ford, using AI to classify images and t-SNE to cluster by visual similarity

An interactive t-SNE visualization using openFrameworks
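As a rough illustration, the sorting in this first mashup can be sketched in a few lines of Python. This is a minimal, hypothetical version (the real interactive piece used openFrameworks), assuming each image has already been run through a classification network to produce a feature vector:

```python
# Minimal sketch: cluster collection images by visual similarity.
# Feature vectors, sizes, and names here are illustrative, not project code.
import numpy as np
from sklearn.manifold import TSNE

def layout_by_similarity(features: np.ndarray) -> np.ndarray:
    """Project high-dimensional image features down to 2D positions
    so that visually similar objects land near each other."""
    tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=42)
    return tsne.fit_transform(features)

# e.g. 200 objects, each described by a 64-dimensional feature vector
features = np.random.RandomState(0).rand(200, 64).astype(np.float32)
positions = layout_by_similarity(features)
print(positions.shape)  # (200, 2): one x/y point per object
```

The 2D points can then be handed to any rendering layer to make the map browsable.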

We challenged ourselves to expand our use of AI, and explore new, creative avenues. Spoiler: all we found was a dead end.

The idiot curator

Bringing AI into the business of exposing connections among objects raised the question: what if AI could become a curator? While we agree with the “1000 interns” notion—AI is amazing for automating mundane tasks, not necessarily for replacing highly skilled humans—we felt we had to try. With our curatorial partners we developed the notion of training an AI to build subjective connections, much as a curator would.

A seriously fancy plate, and… a spooky armchair advertisement?

Was it possible? Sort of. Our experiments showed both promise and danger. Is an ad on a pedestal really “spooky?” And isn’t the AI just reflecting the trainer’s bias? Parallel experiments in asking the AI to extract concepts or understand images were also confounding. Yes, we could now sort objects that looked like cars (hooray?), but why did the AI think a headshot of a racecar driver was either a “plunger” or, worse, a “punching bag” (ouch)?! Moreover, the most successful sketches largely “uncovered” data that was already found in collections management.

For better or worse our prototypes showed us that we were asking the wrong questions. Were we really suggesting the AI could look at a set of images and give us back the same stories as someone who understands design history, cultural context, and more? More importantly, what about an artificial curator was authentic to The Henry Ford?

Finding an Authentic Narrative

Our teams remained excited about AI and, at the same time, resolved that this was not an AI experience. It was about the non-obvious stories and connections between objects. How might we source those? The resourceful, ingenious curatorial staff decided to play out our process internally, with curators instead of machines. The result was an “a-ha” for this project.

The Henry Ford curator mind maps, led by Ryan Jelso and Katherine White

The curators held a workshop where they built mind maps on large posters. Starting with a “hero object,” like the Model T or the Lincoln Chair: what subjective concepts is it connected to? What objects are connected to those concepts? And so on, ad infinitum. They left each hero poster up for weeks and, beautifully, the maps continued to grow and expand. The model proved open-ended and expansive as well as provocative and, occasionally, funny. More importantly, it was uniquely The Henry Ford. After all, where else might you find robust representations of “hacks,” “product mascots,” and “civil rights”?

Now that we had a content framework, our next challenge was to translate these beautiful, dense, static diagrams into a light, playful experience. Time for another workshop, of course.

Bodystorming as UX Accelerator

We’ve written and posted extensively about “bodystorming”—the practice of role-playing an interaction at scale. Our team loves combining paper, tech, and people to actively design, and the Table introduced a perfect opportunity to use this process to translate the work of the curators into an interactive experience.

We hosted The Henry Ford at our Philadelphia studio, and held a day-long, multi-part design session with a (relatively) simple process:

  • Share concept maps by the curators
  • Brainstorm UX ideas through sketching in small groups
  • Build a set of most promising ideas with tracing paper, printed artifacts, and stickers—at scale
  • Bodystorm the paper experiences

Bodystorming a drawing gesture for discovering a connection

Each group had similar takeaways, in slightly different packages: this experience needed to be fun, with no wrong way to interact; the mind maps offered a powerful jumping-off point; and AI provided a way to expand the map and add serendipity, introducing moments of beauty, surprise, and expansion. In short, we borrowed a phrase from Calm Technology’s principles, aiming to build an experience to “amplify the best of technology and the best of humanity.”

A paper prototype of an AI “surprise moment”—and our favorite death-related donut

Sketching in Code: UX

From here we turned again to prototypes. How might we translate our ethos—powerful, explorative, but fun—into the user experience?

Let’s start with UX. All of our workshop sketches incorporated a node-based approach that mirrored the initial mind maps. But how would these pieces interact with each other? Could it be fun?

Maybe connected objects are drawn to you, like magnets? Pretty, but confusing.

Or perhaps they have some springy-ness? This felt like progress.

Should those springs be solid, or could they be more fluid? Maybe not this fluid.

How sudden should the AI “expansion” feel? Definitely not quite this sudden.
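For the curious, the springy feel in sketches like these usually boils down to a Hooke’s-law pull between connected nodes. A toy Python version, with made-up tuning constants rather than anything from the production build:

```python
import math

def spring_force(pos_a, pos_b, rest_length=120.0, stiffness=0.02):
    """Force (fx, fy) exerted on node A by a spring connecting it to node B;
    node B receives the opposite force. Stretched springs pull the nodes
    together, compressed springs push them apart."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dist = math.hypot(dx, dy) or 1e-6  # avoid dividing by zero when overlapping
    stretch = dist - rest_length       # how far from the resting distance
    fx = stiffness * stretch * dx / dist
    fy = stiffness * stretch * dy / dist
    return fx, fy

# Two nodes 240px apart with a 120px rest length get pulled together along +x
print(spring_force((0.0, 0.0), (240.0, 0.0)))
```

Applied every frame (with a bit of damping), even this small rule produces the lively, organic motion we were chasing.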

With each code sketch we started to home in on interactions and animations that resonated. Our next prototype was more tactical: how do we keep the Table from immediately filling up with stuff? After all, a group experience where one person can immediately take over is… not a group experience.

Pretty quickly, we landed on the idea of drawing plus shrinking.

Draw, expand, and (wait for it) shrink—a process for success

Our final interface. Drawing! Physics! Slow, responsive shrinking!
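The slow shrink can be modeled as exponential decay on each node’s radius: the longer an object sits untouched, the smaller it gets, down to a floor, and touching it resets the clock. A hypothetical sketch with illustrative constants:

```python
def node_radius(base_radius, idle_seconds, half_life=20.0, min_radius=18.0):
    """Shrink an untouched node over time to free up table space.
    Touching a node would reset idle_seconds to zero, restoring full size.
    All constants are made-up tuning values, not project code."""
    decayed = base_radius * 0.5 ** (idle_seconds / half_life)
    return max(decayed, min_radius)

print(node_radius(80, 0))   # freshly touched: full size, 80.0
print(node_radius(80, 20))  # one half-life later: 40.0
```

Because the decay never drops below a minimum radius, dormant objects stay visible and tappable instead of vanishing entirely.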

We had landed on a UI system that we felt captured the fun we were looking for. And yet, as with each piece of our process, we still were challenged by AI. Would it also be able to deliver on our dreams, or was it just an epic red herring?

Sketching in Code: AI

Our goal from here was to ensure that the AI helped us scale. We couldn’t expect the curators to write tens of thousands of connections. From our earliest work, we knew that objects grouped by visual similarity were beautiful to navigate and could be translated into smaller visual “gradients” between two objects. However, we quickly ran into the hard limits of existing code libraries, which all worked in this model: give us two objects, and we will give you back a gradient between three and eight-ish objects. As designers, we wanted a bit more control; as people who like fun, we also wanted to be able to stretch these connections much, much further.

Our next prototype is one of my favorites. We took a “lily pad” approach, where the AI could “leap” from object to object and generate a visual gradient as long (or short) as we wanted. So, between a telephone and a ketchup bottle, it might leap to a flashlight and a glass funnel—giving us back a pretty legible, if weird, line of 15 objects. While still often mysterious, it was a success!
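In spirit, the lily-pad walk is a greedy search through feature space: each hop lands on an unvisited object that is visually similar to the current one and a bit closer to the destination. A simplified Python sketch under those assumptions; the scoring weights and Euclidean metric are illustrative, not the production code:

```python
import numpy as np

def lily_pad_path(features, start, goal, max_steps=15, goal_pull=0.5):
    """Greedy 'lily pad' walk between two objects in feature space.
    Each hop picks the unvisited object most similar to the current one,
    nudged toward the goal, yielding a gradient of arbitrary length."""
    path, visited = [start], {start}
    current = start
    for _ in range(max_steps - 1):
        if current == goal:
            break
        d_current = np.linalg.norm(features - features[current], axis=1)
        d_goal = np.linalg.norm(features - features[goal], axis=1)
        score = d_current + goal_pull * d_goal  # similar to here, closer to there
        score[list(visited)] = np.inf           # never revisit an object
        current = int(np.argmin(score))
        path.append(current)
        visited.add(current)
    return path
```

Capping max_steps gives control over the gradient’s length, and excluding visited objects guards against exactly the kind of repeats bug we hit along the way.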

An early example of some long connections… with a bug that featured some repeats.

Where else could we add some visual serendipity? Could we use color to build a different type of bridge between two objects? Yes! However, there was a lot of brown, and some pretty lackluster gradients with inaccessible color contrast; we tweaked our code to filter for the most saturated and most distinct dominant colors, yielding ever-better results.
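One way to implement that filtering pass is to score each object’s dominant color by vividness (saturation times brightness) and keep only the punchy ones, sorted by hue. A hypothetical Python sketch; the score and threshold are our illustrative assumptions, not the project’s exact code:

```python
import colorsys

def pick_gradient_colors(rgb_colors, min_vividness=0.5):
    """Keep only vivid dominant colors, dropping murky browns and greys,
    then sort by hue so neighbors blend into a smooth gradient."""
    keep = []
    for r, g, b in rgb_colors:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if s * v >= min_vividness:  # saturation x brightness, a chroma-like score
            keep.append((h, (r, g, b)))
    keep.sort()  # ascending hue: red -> yellow -> green -> blue -> magenta
    return [rgb for _, rgb in keep]

# A muddy brown and a near-grey get dropped; vivid red and blue survive
print(pick_gradient_colors([(139, 90, 43), (200, 30, 40), (30, 60, 200), (120, 118, 115)]))
```

Scoring by saturation alone isn’t enough here, since browns are just dark oranges in HSV; multiplying in brightness is what actually culls them.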

Our first visualization of the entire collection by color

More promising prototypes: saturated gradients

You may be sensing a theme here: the AI worked best when it was tailored (by a human) to the collection. Finally, we were on the right track.

Prototype, Test, Repeat

Alongside all of these prototypes were full- and part-scale mockups of the whole Table itself. At Bluecadet we always work with final hardware, allowing us to dial in our work without too many on-site surprises. We set up the real table in our office in Philly, and built a miniature mockup in our smaller New York studio. As we built, tested, and iterated, we invited The Henry Ford back to Philly to experience the table at scale. We extended the atmosphere that kicked the project off into our reviews—we invited open-ended exploration and had our clients respond to prompts while they played.

Clients and Cadets exploring all at once

An Ongoing Experiment

The Table at The Henry Ford staff playtesting—with mind maps in the background

Our final product is the sum of a huge number of sketches and prototypes, and (we think) a perfect overlap between the visions of our team and The Henry Ford. The interface echoes the curators’ mind maps and transforms them into something you can’t help but play with.

We’re thrilled with the end product and know we’ve only seen the tip of the iceberg as far as the use of AI within the cultural space—and even within The Henry Ford. Many of our prototypes brought up more questions than we could answer: How can advances in AI treatment of archival images, such as super-resolution upscaling and colorization, lead to new curatorial discoveries? What might happen with a re-trained AI “curator” now that we have a large dataset of concepts? Might AI allow for more robust digital collections at institutions that don’t have the staff resources?

AI will lead to a general rethinking and reimagining of digital collections, and will open up new ways to share, experience, remix, explore, and discover museums in person and online. At the same time, we firmly believe our experience with The Henry Ford will remain relevant. AI is one of many tools in the creative and storytelling toolbox, not a panacea. It requires near-constant critical evaluation, a willingness to experiment, and an understanding that AI is always most powerful in collaboration with humans. Striking that balance is a process, but we’ve found it to be as fun as it is vital.
