
The intersection of design and science


During the prototyping stages of the journey that brought Echo Show 10 to life, the design, engineering, and science teams behind it encountered a surprise: one of their early assumptions was proving to be wrong.

The feature that most distinguishes the current generation from its predecessors is the way the device utilizes motion to automatically face users as they move around a room and interact with Alexa. This allows users to move around in the kitchen while consulting a recipe, or to move freely when engaging in a video call, with the screen staying in view.

Naturally, or so the team thought, users would want the device to remain facing them, matching where they were at all times. “You walk from the sink to the fridge, say, while you’re using the device for a recipe, the device moves with you,” said David Rowell, principal UX designer. Because no hardware existed yet, the team had to create a method of prototyping, so they turned to virtual reality (VR). That approach enabled the Echo Show 10 teams to work together to test assumptions, including their assumption about how the screen should behave. In this case, what they experienced in VR made them change course.

“We had a paradigm that we thought worked really well, but once we tested it, we quickly discovered that we don’t want to be one-to-one accurate,” said David Jara, senior UX motion designer. In fact, he said, the feedback led them to a somewhat unexpected conclusion: the device should actually lag behind the user. “Even though, from a pragmatic standpoint, you would think, ‘Well, this thing is too slow. Why can’t it keep up?’, once you experienced it, the slowed down version was so much more pleasant.”
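Amazon hasn’t published how that deliberate lag is implemented, but the effect Jara describes can be pictured as a simple low-pass filter on the screen’s target angle. The sketch below is purely illustrative; the function name and smoothing factor are assumptions, not the device’s actual code.

```python
def smooth_angle(current_deg: float, target_deg: float, alpha: float = 0.15) -> float:
    """Move the screen a fraction of the way toward the target each update.

    An alpha well below 1 means the screen deliberately lags the person
    instead of snapping to their position; smaller values feel calmer but slower.
    (Illustrative sketch only; not the device's actual control code.)
    """
    # Take the shortest angular path (e.g. 350 deg -> 10 deg should rotate +20, not -340).
    error = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    return (current_deg + alpha * error) % 360.0

# Example: the person is at 90 degrees, the screen at 0; after one update the screen
# has moved only ~13.5 degrees, easing toward the person over several frames.
print(smooth_angle(0.0, 90.0))
```

Each update nudges the screen only part of the way toward the person, so quick movements are softened into the slower, more pleasant motion the team settled on.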

This was just one instance of the assumption-changing feedback and research that required teams of designers, engineers, software developers, and scientists to constantly iterate and adapt. Those teams spent many months hypothesizing, experimenting, learning, iterating, and ultimately creating Echo Show 10, which was released Thursday. Amazon Science talked to some of those team members to find out how they collaborated to tackle the challenges of developing a motorized smart display that pairs sound localization technology with computer vision models.

From idea to iteration

“The idea came from the product team about ways we could differentiate Echo Show,” Rowell said. “The idea came up about this rotating device, but we didn’t really know what we wanted to use it for, which is when design came in and started creating use cases for how we could take advantage of motion.”

The design team envisioned a device that moved with users in a way that was both smooth and useful.

Adding motion to Echo Show was a really big undertaking. There were a lot of challenges, including how do we make sure that the experience is natural.

Dinesh Nair, applied science manager

That presented some significant challenges for the scientists involved in the project. “Adding motion to Echo Show was a really big undertaking,” said Dinesh Nair, an applied science manager in Emerging Devices. “There were a lot of challenges, including how do we make sure that the experience is natural, and not perceived as creepy by the user.”

Not only did the team have to create a motion experience that felt natural, they had to do it on a relatively small device. “Building state-of-the-art computer vision algorithms that were processed locally on the device was the greatest challenge we faced,” said Varsha Hedau, applied science manager.

The multi-faceted nature of the project also prompted the teams to test the device in a fairly new way. “When the project came along, we decided that VR would be a great way to actually demonstrate Echo Show 10, particularly with motion,” Rowell noted. “How could it move with you? How does it frame you? How do we fine-tune all the ways we want machine learning to move with the correct person?”

Behind each of those questions lay challenges for the design, science, and engineering teams. To identify and address those challenges, the far-flung teams collaborated regularly, even in the midst of a pandemic. “It was interesting because we’re spread over many different locations in the US,” Rowell said. “We had a lot of video calls and VR meant teams could very quickly iterate. There was a lot of sharing and VR was great for that.”

Clearing the hurdles

One of the first hurdles the teams had to clear was how to accurately and consistently locate a person.

“The way we initially thought about doing this was to use spatial cues from your voice to estimate where you are,” Nair said. “Using the direction given by Echo’s chosen beam, the idea was to move the device to face you, and then computer vision algorithms would kick in.”

The science behind Echo Show 10

A combination of audio and visual signals guides the device’s movement, so the screen is always in view. Learn more about the science that powers that intelligent motion.

That approach presented dual challenges. Current Echo devices form beams in multiple directions and then choose the best beam for speech recognition. “One of the issues with beam selection is that the accuracy is plus or minus 30 degrees for our traditional Echo devices,” Nair observed. “Another is issues with interference noise and sound reflections, for example if you place the device in a corner or there is noise near the person.” The acoustic reflections were particularly vexing since they interfere with the direct sound from the person speaking, especially when the device is playing music. Traditional sound source localization algorithms are also susceptible to these problems.

The Audio Technology team addressed these challenges by developing a new sound localization algorithm to determine the direction of sound. “By breaking down sound waves into their fundamental components and training a model to detect the direct sound, we can accurately determine the direction that sound is coming from,” said Phil Hilmes, director of audio technology. That, along with other algorithm developments, led the team to deliver a sound direction algorithm that was more robust to reflections and interference from noise or music playback, even when the playback was louder than the person’s voice.
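The team’s exact algorithm isn’t public, but the general idea of estimating a sound’s direction from the tiny arrival-time differences between microphones can be sketched with GCC-PHAT, a standard technique whose spectral whitening makes it less sensitive to reflections than raw cross-correlation. Everything below, from the two-microphone setup to the function names, is an illustrative assumption rather than the device’s implementation.

```python
import numpy as np

def estimate_tdoa(mic_a: np.ndarray, mic_b: np.ndarray, fs: int) -> float:
    """Estimate the time difference of arrival (TDOA) between two microphone
    signals with GCC-PHAT; the phase transform whitens the spectrum so the
    direct-path delay dominates over reflections and broadband noise."""
    n = len(mic_a) + len(mic_b)
    A = np.fft.rfft(mic_a, n=n)
    B = np.fft.rfft(mic_b, n=n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12                 # PHAT weighting
    corr = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    corr = np.concatenate((corr[-max_shift:], corr[:max_shift + 1]))
    delay_samples = np.argmax(np.abs(corr)) - max_shift
    return delay_samples / fs

def tdoa_to_bearing(tdoa: float, mic_spacing_m: float, c: float = 343.0) -> float:
    """Convert a time delay into a bearing (degrees) relative to the mic axis."""
    ratio = np.clip(c * tdoa / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))
```

A device with more than two microphones can repeat this pairwise estimate and combine the results, which hints at why a unit pushed into a corner, surrounded by strong reflections, is so much harder to get right.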

Rowell said, “When we originally conceived of the device, we envisioned it being placed in open space, like a kitchen island, so you could use the device effectively from multiple rooms.” Customer feedback during beta testing showed this assumption ran into literal walls. “We found that people actually put the device closer to walls, so the device had to work well in these positions.” In some of these more challenging positions, audio alone is still insufficient for accurate localization, and extra cues from other sensors are needed.

Echo Show 10 designers initially thought it would be placed in open space, like a kitchen island. Feedback during beta testing showed customers placed it closer to walls, so the teams adjusted.

The design team worked with the science teams so the device relied not just on sound, but also on computer vision. Computer vision algorithms allow the device to locate humans within its field of view, helping it improve accuracy and distinguish people from sounds reflecting off walls, or coming from other sources. The teams also developed fusion algorithms for combining computer vision and sound direction into a model that optimized the final movement.
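The article doesn’t detail the fusion model, but one minimal way to picture combining an audio bearing with a vision bearing is a confidence-weighted average computed on the unit circle. The weights, names, and example numbers below are assumptions for illustration only.

```python
import math

def fuse_bearings(audio_deg: float, audio_conf: float,
                  vision_deg: float, vision_conf: float) -> float:
    """Confidence-weighted average of two bearing estimates, done on the
    unit circle so angles near the 0/360 wrap-around combine correctly.
    (Illustrative sketch, not the product's actual fusion model.)"""
    x = (audio_conf * math.cos(math.radians(audio_deg))
         + vision_conf * math.cos(math.radians(vision_deg)))
    y = (audio_conf * math.sin(math.radians(audio_deg))
         + vision_conf * math.sin(math.radians(vision_deg)))
    return math.degrees(math.atan2(y, x)) % 360.0

# Example: audio says 30 degrees with low confidence (perhaps a reflection off a
# nearby wall), vision says 10 degrees with high confidence; the fused estimate
# lands close to the vision bearing.
print(fuse_bearings(30.0, 0.2, 10.0, 0.8))
```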

That collaboration enabled the design team to work with the device engineers to limit the device’s rotation. “That approach prevented the device from turning and basically looking away from you or looking at the wall or never looking at you straight on,” Rowell said. “It really tuned in the algorithms and got better at working out where you were.”

The teams undertook a thorough review of every assumption made in the design phase and adapted based on actual customer interactions. That included the realization that the device’s tracking speed didn’t need to be slow so much as it needed to be intelligent.

“The biggest challenge with Echo Show 10 was to make motion work intelligently,” said Meeta Mishra, principal technical program manager for Echo Devices. “The science behind the device movement is based on fusion of various inputs like sound source, user presence, device placement, and lighting conditions, to name a few. The internal dog-fooding, coupled with the work-from-home situation, brought forward the real user environment for our testing and iterations. This gave us wider exposure to the varied home conditions needed to formulate the right user experience that will work in typical households, and also strengthened our science models to make this device a delight.”

Frame rates and bounding boxes

Responding to the user feedback about the preference for intelligent motion meant the science and design teams also had to navigate issues around detection. “Video calls often run at 24 frames a second,” Nair observed. “But a deep learning network that accurately detects where you are, those don’t run as fast, they’re typically running at 10 frames per second on the device.”

That latency meant several teams had to find a way to bridge the difference between the frame rates. “We had to work with not just the design team, but also the team that worked on the framing software,” Nair said. “We had to figure out how we could give intermediate results between detections by tracking the person.”

By breaking down sound waves into their fundamental components and training a model … we can accurately determine the direction that sound is coming from.

Phil Hilmes, director of audio technology

Hedau and her team helped deliver the answer in the form of bounding boxes and Kalman filtering, an algorithm that provides estimates of some unknown variables given the measurements observed over time. That approach allows the device to, essentially, make informed guesses about a user’s movement.
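The specific filter design isn’t described, but a minimal constant-velocity Kalman filter over a bounding-box center shows the idea: prediction can run at the display’s framing rate while the correction step runs only when the slower detector produces a new box. The class, parameters, and noise values below are illustrative assumptions.

```python
import numpy as np

class BoxTracker:
    """Constant-velocity Kalman filter over a bounding-box center (x, y).

    predict() can be called at the framing rate (e.g. 24 fps) to interpolate
    position, while update() is called only when a new detection arrives
    (e.g. 10 fps). Illustrative sketch only."""

    def __init__(self, x: float, y: float, dt: float = 1 / 24):
        self.state = np.array([x, y, 0.0, 0.0])                # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                              # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                       # constant-velocity motion model
        self.H = np.eye(2, 4)                                  # we observe x, y only
        self.Q = np.eye(4) * 1e-2                              # process noise (assumed)
        self.R = np.eye(2) * 1.0                               # measurement noise (assumed)

    def predict(self) -> np.ndarray:
        """Advance the estimate one frame; returns the predicted center."""
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, x: float, y: float) -> None:
        """Correct the estimate with a fresh detection."""
        z = np.array([x, y])
        residual = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ residual
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Between detections the filter keeps extrapolating from the last known position and velocity, which is what lets the framing stay smooth even though the detector runs at a fraction of the display rate.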

During testing, the teams also discovered that the device would need to account for the manner in which a person interacted with it. “We found that when people are on a call, there are two use cases,” Rowell observed. “They’re either very engaged with the call, where they’re close to the device and looking at the device and the other person on the other end, or they’re multitasking.”

The solution was born, yet again, from collaboration. “We went through a lot of experiments to model which user experience really works the best,” Hedau said. Those experiments led the team to use the device’s computer vision to estimate the distance between a person and Echo Show 10.

“We have settings based on the distance that the customer is from the device, which is a way to roughly measure how engaged a customer is,” Rowell said. “When a person is really up close, we don’t want the device to move too much because the screen just feels like it’s fidgety. But if somebody is on a call and multitasking, they’re moving a lot. In this instance, we want smoother transitions.”
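The actual thresholds and motion settings aren’t public; the sketch below simply illustrates the behavior Rowell describes, mapping a rough distance estimate to a smoothing factor that could feed the kind of easing shown earlier. The numbers are placeholders, not product values.

```python
def smoothing_for_distance(distance_m: float) -> float:
    """Pick a per-update smoothing factor from a rough engagement estimate.

    The thresholds and values are purely illustrative: an up-close, engaged
    user gets an almost stationary screen, while a multitasking user farther
    away gets smoother, larger moves.
    """
    if distance_m < 1.0:      # up close and engaged: barely move
        return 0.02
    elif distance_m < 2.5:    # mid-range: gentle tracking
        return 0.08
    else:                     # across the room, multitasking: follow more freely
        return 0.15
```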

Looking to the future

The teams behind the Echo Show 10 are, unsurprisingly, already pondering what’s next. Rowell suggested that, in the future, the Echo Show might show a bit of personality. “We can make the device more playful,” Rowell said. “We could start to express a lot of personality with the hardware.” [Editor’s note: Some of this is currently enabled via APIs; certain games can “take on new personality through the ability to make the device shake in concert with sound effects and on-screen animations.”]

Nair said his team will also focus on making the on-device processing even faster. “A significant portion of the overall on-device processing is CV and deep learning,” he noted. “Deep networks are always evolving, and we will keep pushing that frontier.”

“Our teams are working continuously to further push the performance of our deep learning models in corner cases such as multiple people, low lighting, fast motion, and more,” added Hedau.

Whatever route Echo Show goes next, the teams behind it already know one thing for certain: they can collaborate their way through just about anything. “With Echo Show 10, there were a lot of assumptions we had when we started, but we didn’t know which would prove true until we got there,” Jara said. “We were kind of building the plane as we were flying it.”


