A Judge’s Notebook: 2018 SXSW™ Interactive Innovation Awards


It was my privilege to be a part of the judging panel for a third consecutive year.

The SXSW Conference is an epicenter of overlapping technology trends, industry interests, dedicated entrepreneurs, and far-reaching visions. There’s simply too much content, presented in too little time, to take it all in. This fact makes the Interactive Innovation Awards particularly notable as a concentrated snapshot of the state of technology. (NOTE: opinions reflected are personal and denote no official status or endorsement from SXSW).

The Big Picture

If there is one overriding takeaway that marks the 2018 Interactive Innovation Awards as a crossroads, it’s this: the vast majority of the finalists (and perhaps a number of others that just missed) are at a much more mature, developed state than in recent years. Many of the finalists in 2016 and 2017 were at earlier stages: prototypes in beta mode or developed tech in search of viable use cases were common. In 2018, the entrants were deeper into the development cycle. There were completed projects, pilot programs running with corporate partners, products on the market, and more. At the same time, while there were exceptions, many of the finalists and winners still achieved great success with relatively small teams.

The breakout trend of SXSW 2018 was global connectedness: the spirit of collaboration was evident throughout the finalist showcase.

Judging in Brief

Only the conference organizers know the exact numbers associated with the judging process. Finalists emerge after a round of preliminary judging in which individual entrants are evaluated across four criteria: Creativity / Innovation, Form / Design, Function / Utility, and Overall Experience. Each judge also states if the entrant is a good fit for the conference. 

The judges then make their final selections on the day of the finalist showcase at the conference by simple ballot. There are thirteen categories in all, plus a best in show. Here are some of the author’s firsthand insights from the experience.

Winning Propositions

While the diversity and breadth of the finalists make overriding pronouncements difficult, there were visible threads common to several winning approaches. The presence of both bigger and deeper thinking united a number of winners. In a few cases, entrants looked beyond a single use case to create breakthrough projects.

One of the most impressive projects from any category was the winner for Smart Cities: The Jacques-Cartier Bridge Interactive Illumination Concept (https://momentfactory.com/work/all/all/jacques-cartier-bridge-illumination). A team of seven businesses and governmental agencies captured the pulse of the city of Montreal through a massive LED light installation on this iconic bridge in the center of the city. Built for a 10-year lifespan, the installation expresses the mood and energy of the city through lightshows informed by both social media and 1,700 individual news and data sources. This is data visualization writ large, with an innovative human fingerprint.
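To make the data-visualization idea more concrete, here is a minimal, hypothetical sketch of how aggregate sentiment and activity readings from many feeds might be mapped to a color and intensity for a lighting rig. The feed names, scores, and color mapping are illustrative assumptions only; they are not details of the actual Moment Factory system.

```python
# Hypothetical sketch: aggregate feed readings -> a single "mood" color for the rig.
import colorsys

def mood_to_rgb(sentiment: float, activity: float) -> tuple[int, int, int]:
    """Map sentiment (-1..1) to hue (cool blue to warm orange) and activity (0..1) to brightness."""
    hue = 0.6 - 0.5 * (sentiment + 1) / 2          # 0.6 (blue) down to 0.1 (orange)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 0.3 + 0.7 * activity)
    return int(r * 255), int(g * 255), int(b * 255)

# Invented per-feed readings standing in for social, traffic, and weather sources.
feeds = [
    {"source": "social",  "sentiment": 0.4,  "activity": 0.8},
    {"source": "traffic", "sentiment": -0.2, "activity": 0.6},
    {"source": "weather", "sentiment": 0.1,  "activity": 0.3},
]

average = lambda key: sum(f[key] for f in feeds) / len(feeds)
print(mood_to_rgb(average("sentiment"), average("activity")))
```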

The Hands-Free Music Project by Microsoft enables what was once unthinkable: to allow paralyzed individuals to spontaneously make and play music.

Wearable Tech is another category that reached new heights in 2018. While some entrants in past years resembled a sci-fi costume more than a practical, wearable garment, this year’s winner, Jacquard™ by Google (http://atap.google.com/jacquard/), elevated the category. The innovation involves conductive threads literally woven into the fabric of garments. The test case, a denim jacket marketed by Levi’s™, was first and foremost a garment; the tech was unobtrusive. The garment links to the wearer’s smartphone and uses simple touch gestures on the jacket to interact with the app. Touches and directional swipes answer or decline phone calls, advance music selections, adjust volume, and more. The use of a washable, woven-in conductive circuit enables a new level of interaction for any number of garment types, from high fashion to industrial safety.
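As a rough illustration of that interaction model, the sketch below maps gesture events reported by a woven sensor to phone-side actions through a simple dispatch table. The gesture names and handlers are invented for illustration; this is not Google’s Jacquard API.

```python
# Hypothetical sketch: a woven touch sensor reports gestures; the phone dispatches actions.
from typing import Callable

def answer_call() -> str:  return "call answered"
def decline_call() -> str: return "call declined"
def next_track() -> str:   return "skipped to next track"
def volume_up() -> str:    return "volume raised"

GESTURE_ACTIONS: dict[str, Callable[[], str]] = {
    "double_tap":    answer_call,
    "cover":         decline_call,
    "brush_forward": next_track,
    "brush_up":      volume_up,
}

def on_gesture(gesture: str) -> str:
    """Called when the sleeve's sensor recognizes a gesture."""
    action = GESTURE_ACTIONS.get(gesture)
    return action() if action else "gesture ignored"

print(on_gesture("brush_forward"))  # -> skipped to next track
```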

The combined VR & AR category also yielded a surprising innovation. Developed during the production of the 3D animated VR feature film Arden’s Wake, the winner, Maestro: Empowering VR Storytelling Through Social Collaboration by Penrose Studios (http://www.penrosestudios.com/), brought a new vision to producing such projects. 3D animated features frequently involve collaboration between teams across not just countries but continents. The Maestro software allows hands-on VR collaboration from remote locations within the actual VR worlds of the production. The makers facilitated a demo during the Awards showcase in which anyone could get a firsthand tour of the software with the feature film’s director and lead animator right on the production’s VR “set.” With VR projected to grow dramatically in the coming years, this represents a breakthrough development for makers in the field.

One additional entry worthy of note is the Hands-Free Music Project by Microsoft (https://www.microsoft.com/en-us/research/project/microsoft-hands-free-music/). This project illustrates both an exceptional level of empathy (as a music-making technology for those immobilized by illness or accident) and a complex, interdisciplinary solution. Using eye-tracking technology, real-time inputs and composition tools, and linked instruments (including a live, robotic drum kit), the project enables what was once unthinkable: allowing paralyzed individuals to spontaneously make and play music. Microsoft has made a strong commitment to inclusive design; the Hands-Free Music Project is an outcome worthy of its recognition as the winner for Music Innovation.
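For readers curious about the general shape of such a system, here is a loose, hypothetical sketch of one piece of the pipeline: a gaze position from an eye tracker selects a note from an on-screen grid, which is then sent on to a linked instrument. The grid, the mapping, and the function names are assumptions for illustration only, not Microsoft’s implementation.

```python
# Hypothetical sketch: eye-tracking gaze position -> note selection -> instrument.
NOTE_GRID = [["C4", "D4", "E4", "F4"],
             ["G4", "A4", "B4", "C5"]]

def gaze_to_note(x: float, y: float) -> str:
    """Map a normalized gaze position (0..1, 0..1) to a cell in the note grid."""
    row = min(int(y * len(NOTE_GRID)), len(NOTE_GRID) - 1)
    col = min(int(x * len(NOTE_GRID[0])), len(NOTE_GRID[0]) - 1)
    return NOTE_GRID[row][col]

def play(note: str) -> None:
    # Stand-in for sending the note to a synth or robotic instrument (e.g., over MIDI).
    print(f"playing {note}")

play(gaze_to_note(0.7, 0.2))  # gaze toward the upper right -> plays "E4"
```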

Toughest Category: Visual Media Experience

Perhaps no category exemplifies both the growth and quality of the awards competition better than Visual Media Experience. Not only were the finalists consistently high in quality and diverse in approach, but a number of non-finalists arguably achieved finalist-worthy outcomes as well.

It’s particularly noteworthy that the category winner diverged sharply from a number of its competing entrants. While several entrants achieved high standards through immersive, architectural installations and experiences, the LEGO™ House Fish Designer by LEGO House/Trigger Global (http://www.triggerglobal.com/work/lego-house-fish-designer) was decidedly more intimate and personal. The hands-on creation and simple input process let any individual build a LEGO fish and add it to a virtual, HD fish tank; a simple scanner and animation program convert the physical model into a virtual, swimming fish.

The LEGO™ House Fish Designer was decidedly more intimate and personal than the immersive, architectural installations and experiences of competing finalists.

One additional way to gauge the competitive nature of the category is to explore entries that fell short of finalist consideration. Two separate immersive experiences showcased creativity, craftsmanship, and engagement for visitors. “Prismverse” (http://xex.com.hk/prismverse/) is an audiovisual art installation created for the skincare brand Dr.Jart+ and its campaign “Light Now. Right Now.” AVA V2 (https://vimeo.com/188716447) was featured at TEDxCERN 2016; inspired by the iconic dome structures of Buckminster Fuller, it reflects experiments in particle physics and cosmic rays. Each uniquely explores the possibilities of sensory experience.

Although in a separate category (SciFi No Longer), the finalist Google™ Earth VR (https://vr.google.com/earth/) also illustrates the high standard of the awards competition. Its presentation of iconic world heritage locations such as the Matterhorn and Machu Picchu in high-definition VR is stunning, and perhaps as recently as a year ago it would have deservedly won its category. At this point in time, though, the project may appear to be a more incremental add-on to Google Earth.

Insights and Honorable Mentions

A big winner in the competition is Swarm AI by Unanimous AI (https://unanimous.ai/what-is-si/). Described by its makers as providing “the interfaces and algorithms to enable ‘human swarms’ to converge online, combining the knowledge, wisdom, insights, and intuitions of diverse groups into a single emergent intelligence,” the platform has produced some impressive results. Its wins in both the AI category and as Best in Show testify to its quality.

If inputs depend on a curated community of subject enthusiasts, does this fall outside the perception of an independent AI?

However, there are two factors suggesting that its sophistication is as yet undetermined. Its online communities have indeed made some spectacular predictions as a group, each beyond the reach of any individual who participated. The correct prediction of the 2017 Kentucky Derby Superfecta (the top four horses in finishing order) is one, a prediction that yielded an $11,000 payoff on a $20 bet. However, this raises an important question: if the inputs depend on a curated community of subject enthusiasts (although notably below the mastery of recognized experts), does this fall outside the perception of an AI reaching independent conclusions? It may also be premature to project its initial accomplishments (many relating to sports and entertainment prognostication) to broader impact without wider testing. Despite these caveats, the entry remains an impressive piece of research.

Three other entrants deserve discussion (in the author’s opinion). Two were category finalists; the last missed the cut:

TinyMOS: Astrophotography made small, smart and social by Y&R Singapore (http://tinymos.com/). An accomplished combination of hardware and software that makes astronomy accessible, it was all the more impressive for being a finalist in two categories: Responsive Design and Student Innovation.

The Cognitive Story by Darwin Ecosystem, Dallas, TX (http://darwinecosystem.com/cognitive-stories/). This machine learning system helps severely limited individuals (those who can’t talk, type, or use eye-tracking technology) communicate by identifying brainwave patterns, detecting intentions, and expressing them to people or systems. It also dramatically simplifies the necessary tech, making it practical outside of research laboratories.

Miner’s Walk by Josephine Lie (http://www.josephinelie.com/miners-walk). Miner’s Walk is an interactive documentary exploring the lives of Indonesian miners who trek the steep slopes of the Ijen Crater in search of sulphur. It uses short- and long-form video, an interactive timeline, and multiple points of view reminiscent of a National Geographic style of storytelling.

A Celebration of Creativity

Regardless of the designations of winners and losers, the 2018 SXSW Interactive Innovation Awards celebrate the creativity of diverse teams, concepts, and approaches. The showcase gives all attendees who experience it an invaluable glimpse at emerging technology.

Readers can judge for themselves the merits each finalist exhibited here:

https://www.sxsw.com/news/2018/announcing-2018-interactive-innovation-awards-finalists/

The award winners are listed here:

https://www.sxsw.com/interactive/2018/announcing-2018-winners-interactive-innovation-awards/

 


: : Contact Tom Berno directly at tb.idea21@gmail.com for more information

A Microphone, a Speaker, and an Internet Connection Walk into a Bar...

 

 EXPLORING VOICE AI AT SXSW INTERACTIVE 2018

 

Both AI (Artificial Intelligence) and voice interfaces were hot topics at this year’s SXSW Interactive conference. While the technology still falls short of the independently cognitive vision familiar to science fiction fans, progress has been rapid, and opportunity is expanding. Two presentations from this year’s conference, Crafting Conversations: Design in the Age of AI and The Role of Voice in Music Discovery, captured important and notably contrasting approaches to designing for voice-enabled interfaces.

4.9 billion devices running Voice AI in 2016 will grow to a projected 21 billion by 2020.

In Crafting Conversations, Google™ Conversation Design Lead Daniel Padgett summarized the foundations of design for voice and highlighted the priorities that guide design practices for Google Home. “I teach robots to talk...” stated Padgett, and he positioned the current state of voice interfaces as a stark contrast to earlier stages of computer technology in which “we had to learn to speak to the computer in its native language.” Indeed, to anyone who worked with early technologies such as punch cards and command-line inputs, today’s voice-enabled devices must seem almost miraculous.

INDUSTRY TRENDS

Padgett’s views on the growth of Voice AI reveal much about Google’s broader strategy. He stressed the speed and simplicity of voice queries—comparing them to the number of “taps” necessary for text input for even simple searches—as well as the ubiquity of the service. These map well to Google’s core brand values exemplified by its clean white search landing page. He also cited statistics illustrating the category’s growth: 400 million-plus devices running Google Assistant alone, a sales volume of one Google voice-enabled device per second from October 2017–January 2018, and a platform supporting 22 languages (also a core competency for Google).

In The Role of Voice in Music Discovery, SoundHound™ Inc. VP and General Manager Katie McMahon expressed contrasting views of the state of Voice AI. While Padgett emphasized the evolution of the technology, McMahon framed the current point in Voice AI development as a generational one. She stated that while the year 2000 defined the “Touch-Tap-Swipe generation,” 2015 marked “Gen V: the voice-first generation.” She also identified 2017 as a tipping point in the development of voice-enabled AI, much as 2007 marked the takeoff for mobile-first UX/UI strategies. She noted the coming growth of the category as well, from approximately 4.9 billion devices running Voice AI in 2016 to a projected 21 billion by 2020.

GOOGLE’S SEARCH FOR A VOICE AI DESIGN PROCESS

Google’s approach to designing for Voice AI focuses on the cognitive dimensions of human conversation. Padgett outlined four broad considerations to explain his team’s design approach for voice.

The first involves modeling conversation. Central to this is the Cooperative Principle developed by Paul Grice in 1975. Grice emphasized four “maxims” that facilitate effective conversation by building cooperation between speakers: quality (truth), quantity (the right amount), relevance (the topic at hand), and manner (clarity). Other linguistic cues, such as turn taking, questions, silence, and even gesture, also inform our conversations. Indeed, Padgett emphasized the need for clear and concise inquiries to make the best use of Voice AI.

 

Google has optimized language processing to a word error rate of 4.9%, making recognition itself largely a solved issue.

The second consideration is knowing the two speakers involved in a Voice AI conversation. The first is the human side. Padgett described the human in the conversation as a “hands-busy/eyes-busy/multitasker.” Google’s personas identify these users as “instant experts” with high standards and low tolerance for error in their use of Voice AI. They are happiest when acting within what they instinctively know and do in conversation. The flip side is the Voice AI itself. Padgett astutely describes this as literally “the voice of your brand.” As such, it deserves a specific role and even its own backstory to establish it as a core brand channel.

Google’s third consideration is the toolkit for Voice AI, addressing the nature of the voice signal itself and the ability to recognize and understand speech. The spoken word, according to Padgett, is both always moving and by nature ephemeral, always fading (best illustrated by a game of Pass It On). Google constructs its Voice AI responses specifically for the ephemeral nature of speech: answering the primary query and then adding prompts to explore further information. And while Google has optimized language processing to a word error rate of 4.9%, development remains to reconcile what someone says with an intuitive interpretation of how they said it.
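The response pattern Padgett described, answer first and then invite further exploration, can be sketched in a few lines. Everything here is invented for illustration; it is not Google’s response engine.

```python
# Hypothetical sketch: answer the primary query, then append a prompt to explore further.
def build_response(answer: str, follow_ups: list[str]) -> str:
    response = answer
    if follow_ups:
        response += f" Would you like to hear about {follow_ups[0]}?"
    return response

print(build_response(
    "Today's high will be 24 degrees with clear skies.",   # invented answer
    ["this weekend's forecast", "current air quality"],    # invented follow-up topics
))
```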

Lastly, Google considers the expanding ecosystem of technology and communication. It aspires to “design for the overlaps” between voice-only, voice-forward, intermodal, and visual communication and function.

SOUNDHOUND’S EXPANDING VOICE ECOSYSTEM

SoundHound appears to take a more holistic, and perhaps more innovative, approach to the use of Voice AI. From its position as a leader in music discovery, it has developed a self-contained ecosystem for Voice AI-enabled devices and applications. SoundHound continues to focus on music but has integrated two additional applications: Houndify™, a Voice AI platform, and Hound™, an AI-enabled voice assistant.

The Houndify AI offers two strategic technology advances for voice queries relative to other platforms such as Amazon Echo™/Alexa™, Apple’s Siri™, or OK Google.

The first of these is a different model of query, described by McMahon as “compound/complex.” While Padgett stressed the ideal of simple, concise questions, this still limits utility and remains at the level of “speaking in the computer’s language.” The Houndify AI can handle queries with both inclusions and exclusions. An example would be “OK Hound, find me a restaurant within 3 miles but not a pizza place,” or “find me a flight next week to Chicago but not on United Airlines.” The answers generated by Houndify, while lengthier and more detailed than Google Assistant’s, are also more specific. This is also a more intuitive manner of voice search for people, who often know more about what they aren’t looking for when they’re in browsing mode.
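A simplified, hypothetical sketch of what a “compound/complex” query might look like once parsed appears below: the query carries both inclusion and exclusion constraints, and candidates are filtered against both. The data model and the sample data are assumptions for illustration, not SoundHound’s Houndify API.

```python
# Hypothetical sketch: a parsed voice query with inclusions and exclusions filters candidates.
from dataclasses import dataclass, field

@dataclass
class ParsedQuery:
    category: str
    max_miles: float
    exclude_tags: set[str] = field(default_factory=set)

restaurants = [
    {"name": "Luigi's",     "tags": {"pizza", "italian"}, "miles": 1.2},
    {"name": "Saigon Bowl", "tags": {"vietnamese"},       "miles": 2.4},
    {"name": "Taco Norte",  "tags": {"mexican"},          "miles": 4.1},
]

def search(query: ParsedQuery, candidates: list[dict]) -> list[str]:
    return [c["name"] for c in candidates
            if c["miles"] <= query.max_miles and not (c["tags"] & query.exclude_tags)]

# "Find me a restaurant within 3 miles but not a pizza place."
q = ParsedQuery(category="restaurant", max_miles=3.0, exclude_tags={"pizza"})
print(search(q, restaurants))  # -> ['Saigon Bowl']
```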

 

SoundHound users can search, discover and play music using voice commands instead of clicking, texting, tapping or swiping.

The second tech innovation for Houndify involves what McMahon called “Speech to Meaning.” This involves integrating the two primary machine learning aspects of Voice AI: Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU). By making these data sets interoperable, Houndify makes the interactions between human and AI more seamless and organic.
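The contrast with a strict pipeline can be sketched conceptually: instead of interpreting only a finished transcript, an integrated approach re-evaluates its intent hypothesis as words arrive. The toy intent rules below are assumptions for illustration and say nothing about Houndify’s internals.

```python
# Conceptual sketch: intent is re-evaluated on every partial ASR result, not just the final one.
def guess_intent(words: list[str]) -> str:
    text = " ".join(words)
    if "play" in text and "music" in text:
        return "play_music"
    if "find" in text and "restaurant" in text:
        return "find_restaurant"
    return "unknown"

def integrated_listen(word_stream: list[str]) -> str:
    heard: list[str] = []
    intent = "unknown"
    for word in word_stream:          # words arrive incrementally from speech recognition
        heard.append(word)
        intent = guess_intent(heard)  # understanding updates while the user is still speaking
    return intent

print(integrated_listen(["play", "some", "jazz", "music"]))  # -> play_music
```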

SoundHound displays its own penchant for innovation in its use of voice technology. Through UX research, SoundHound discovered that the number one reason users abandoned its platform was that navigating the app was frustrating. Rather than taking a visual UX/UI approach to the problem, SoundHound looked at voice-driven navigation as a better, more user-focused solution. Now, SoundHound users can search, discover, and play music using voice commands instead of clicking, texting, tapping, or swiping. The willingness to go beyond an incremental fix exemplifies the innovative DNA of a company comfortable applying the principles of design thinking and agile development.

BRAND VOICES

McMahon echoed Padgett’s endorsement of brand as an important dimension of Voice AI. Here, Houndify also diverges from Google’s strategy. While Google enjoys the strength of its brand and Android™ ecosystem, Houndify can be adapted by an independent brand to create and integrate its own Voice AI functionality and applications. Because Houndify and Hound don’t own a brand of device or system, they become an enabling platform, McMahon continued. This makes Houndify a potentially valuable partner for brands that prefer to amplify their own voice through the technology. This open approach offers additional flexibility by being adaptable across devices.

 

Houndify is a potentially invaluable partner for brands that prefer to amplify their own voice.

Houndify’s flexibility gives designers and companies an additional dimension of choice to consider when integrating Voice AI. Companies may prefer the brand halo of Alexa, Google, or Siri as an amplifying feature. Or they may see a potential competitive advantage in creating their own Voice AI presence, one that’s unique to their brand.

WHAT’S NEXT

Padgett indicated that Google’s strategy addresses both the static placement of in-home smart speakers and mobile devices. Each has unique operating conditions, levels of privacy, and utility for the user. Google’s expansion into smart displays (also being developed by Amazon™, Panasonic™, and others) also tips its hand: the company clearly sees an integration of voice and visual browsing, particularly in the home environment.

Padgett also emphasized the need for better use of linguistics, creative writing, and script writing as part of the UX toolkit for voice. McMahon countered that “with little or no UI, systems need to become smarter.” It is clear that this portends an advantage for the systems best able to automate Machine Learning and expand AI capabilities.

These are still early days for Voice AI, and while there are early leaders, it seems that there is still ample time to develop best practices and claim leadership in multiple markets.

 


: : Contact Tom Berno directly at tb.idea21@gmail.com for more information

Strategic convergence comes to big technology brands.

Modernist, sans-serif typography makes the logotypes of Google, Spotify, and Pinterest nearly identical.

It's a given in brand communications that differentiation (i.e., standing out from the crowd) is a top priority. Yet there is a clear trend among a number of the largest tech companies in which a certain sameness appears in their approach to brand logotypes. In a recent post on its Co.Design web portal, Fast Company highlights this trend in the article "Why Do Google, Airbnb, And Pinterest All Have Such Similar Logos?"

The article identifies a number of possible explanations, which undoubtedly have merit. Among them is the observation that a logo no longer equals the brand. This is definitely true, although hardly a new insight; Marty Neumeier made this idea a central pillar of his seminal book on brand building, The Brand Gap. Other contributing factors identified by the various experts quoted in the article include a necessary simplicity, unity across UI elements, and a focus on the broader visual programs each brand creates. "So much of the identity now is defined by a lot of elements and experiences that surround the logo, that are supporting it," stated Howard Belk, co-CEO at Siegel + Gale. It is also true that each brand identity integrates a symbol, or in Google's case a monogram, that helps differentiate each brand.

However, the article does not go further in examining the underlying condition: strategic convergence in big tech brands is rampant. Strategic convergence occurs when a significant number of players in an industry or market adopt similar strategic approaches. In the case at hand, strategic convergence is evident in the visual approach to brand typography. Indeed, modernist, sans-serif typography makes the logotypes of Google, Spotify, and Pinterest nearly identical, and the new Airbnb logotype follows suit.


The extended look and feel of the AirBnB brand system creates its unique personality.

When an industry arrives at a state of strategic convergence (often described as "best practices"), it creates barriers to innovation, as more players seek to adopt the strategy choices of others. It's ironic that many of these companies, while enjoying reputations as innovative or disruptive, found such a similar approach desirable.

One possible explanation is lower perceived risk; the success of the approach for key companies makes it appealing on its own. Drilling down further, by reflecting the design of logotypes from iconic companies like Google et al., a new entity acquires a little of the former's brand halo. The similar appearance of one company to another transfers a measure of the established company's trustworthiness, another brand imperative along with differentiation. Again from the Fast Company article, Thierry Brunfaut, creative director and founding partner at Base Design: "All these bold and neutral logos are telling the consumer the same message: Our brand and our services are simple, straight-forward, and clear. And extremely readable."

It should also be said that there is no evidence that any of these companies made a deliberate decision to emulate another. One of the conditions that defines strategic convergence is that players frequently arrive at the same or similar conclusions independently. Neither are these conclusions limited to issues of visual brand design.

There is also another, more urgent risk to any company residing in a state of strategic convergence: those companies are ripe for disruption. By failing to look deeper at ways to create a unique identity and personality, companies leave a host of potential competitive advantages on the table. The "bold, neutral" approach above may be simple, and even effective, but it mostly falls short of the sustainable advantage that truly iconic brands enjoy via their associated identities.

It is clear that thoughtful design can meet the baseline requirements of compatibility with 21st-century technology and an appropriate level of simplicity. One example of design that, intentionally or not, defies the strategic convergence of technology branding comes from Medium. The selection of a serif typeface, versus a sans serif, immediately distinguishes the brand in a unique and memorable way.


Medium's visual brand standards illustrate an approach that counters the status quo.

Companies should always deeply examine their goals and purposes, and be vigilant to avoid slipping into the false comfort of strategic convergence. When it comes to brand identity, moving away from the current status quo offers a much greater prospect of building a more compelling and dynamic personality. This offers the best opportunity for a company to truly own its market and insulate itself from encroaching disruption.

Contact Tom Berno to continue this discussion and find out more about how your brand can evolve beyond its competition.