Amazon unveils DeepComposer, an AI-enabled piano keyboard

Amazon’s re:Invent 2019 conference kicked off with a bang, or rather with product announcements made during a midnight keynote at The Venetian. The Seattle company’s Amazon Web Services (AWS) division unveiled Amazon Transcribe Medical, a new edition of its automatic speech recognition service that lets developers add medical speech-to-text capabilities to their apps, and it debuted DeepComposer, which enables AWS customers to compose music using AI and a physical (or virtual) MIDI controller.

On the Transcribe side of the equation, Amazon Transcribe Medical offers an API that integrates with voice-enabled apps and works with most microphone-equipped devices. It’s designed to transcribe medical speech for primary care, Amazon says, and to be deployed “at scale” across “thousands” of healthcare facilities to provide secure note-taking for clinical staff. It supports both medical dictation and conversational transcription, and like the standard Amazon Transcribe, Transcribe Medical features conveniences such as automatic and “intelligent” punctuation.
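For developers who want a feel for the API, the sketch below submits a batch job through boto3’s Transcribe client rather than the real-time streaming interface described above; the job name, bucket, and audio file are placeholder values, and the exact parameters should be confirmed against the AWS documentation.

```python
import time

import boto3

# Placeholder resources; swap in your own bucket and audio object.
AUDIO_URI = "s3://example-bucket/dictation-sample.wav"
OUTPUT_BUCKET = "example-transcripts-bucket"

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Kick off a medical transcription job for primary-care dictation.
transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="dictation-demo-001",
    LanguageCode="en-US",
    MediaFormat="wav",
    Media={"MediaFileUri": AUDIO_URI},
    OutputBucketName=OUTPUT_BUCKET,
    Specialty="PRIMARYCARE",
    Type="DICTATION",  # or "CONVERSATION" for clinician-patient dialogue
)

# Poll until the job finishes; the transcript JSON lands in OUTPUT_BUCKET.
while True:
    job = transcribe.get_medical_transcription_job(
        MedicalTranscriptionJobName="dictation-demo-001"
    )["MedicalTranscriptionJob"]
    if job["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
        print(job["TranscriptionJobStatus"])
        break
    time.sleep(10)
```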

Transcribe Medical is fully managed in that it doesn’t require provisioning or management of servers; it sends back a stream of text in real time. Moreover, it’s covered under AWS’ HIPAA eligibility and business associate addendum (BAA), meaning that any customer that enters into a BAA with AWS can use Transcribe Medical to process and store protected health information (PHI).

Amazon says that already, Amgen and SoundLines are using Transcribe Medical to produce text transcripts from recorded notes and feed transcripts into downstream analytics. “For the 3,500 healthcare partners relying on our care team optimization strategies for the past 15 years, we’ve significantly decreased the time and effort required to get to insightful data,” said SoundLines president Vadim Khazan in a statement.

Transcribe Medical’s launch in general availability comes months after AWS made three of its AI-powered, cloud-hosted products (Translate, Comprehend, and Transcribe) eligible under the Health Insurance Portability and Accountability Act of 1996, or HIPAA. It’s the principal law providing data privacy and security provisions for medical information in the U.S.

In a somewhat related reveal this morning, AWS detailed DeepComposer, which it calls the “world’s first” machine learning-enabled musical keyboard. It’s a 32-key, two-octave keyboard designed for developers to try their hand at either pretrained or custom AI models.

Budding composers first record a short musical tune (or use a prerecorded one) before selecting a model for their favorite genre, as well as the model’s architecture parameters and the loss function (which is used during training to measure the difference between the algorithm’s output and the expected value). Next, they choose hyperparameters (parameters whose values are set before the learning process begins) and a validation sample, after which DeepComposer produces a composition that can be played in the AWS console, exported, or shared on SoundCloud.
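Those two terms generalize beyond DeepComposer. The toy Python sketch below is purely illustrative (not the DeepComposer console or API): it fits a one-parameter model, with the learning rate and epoch count as hyperparameters fixed before training and mean squared error as the loss that measures the gap between output and expected value.

```python
import numpy as np

# Hyperparameters: chosen before the learning process begins.
LEARNING_RATE = 0.1
EPOCHS = 500

# Toy data: learn y = 3x from noisy samples.
x = np.random.rand(100)
y = 3.0 * x + 0.05 * np.random.randn(100)

w = 0.0  # model parameter, learned during training

def loss(pred, target):
    """Mean squared error: the gap between output and expected value."""
    return np.mean((pred - target) ** 2)

for _ in range(EPOCHS):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)  # d(loss)/dw
    w -= LEARNING_RATE * grad

print(f"learned w = {w:.2f}, final loss = {loss(w * x, y):.4f}")
```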

As AWS AI and machine learning evangelist Julien Simon explains in a blog post, DeepComposer taps a generative model to fill in compositional gaps in songs. A generator component draws on random data to create samples that it forwards to a discriminator component, which learns to distinguish genuine samples from fakes. As the discriminator improves, so does the generator, which progressively learns to create samples closer to the genuine ones.
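That description matches the standard generative adversarial network (GAN) recipe. The sketch below is an illustrative PyTorch training loop on toy one-dimensional data rather than AWS’s actual music model; it only shows the adversarial dynamic in which the discriminator’s feedback pushes the generator toward more realistic samples.

```python
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 8, 1

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(NOISE_DIM, 16), nn.ReLU(), nn.Linear(16, DATA_DIM))
# Discriminator: scores how likely a sample is to be genuine.
D = nn.Sequential(nn.Linear(DATA_DIM, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for "genuine" data: samples drawn from N(2, 0.5).
    return 2.0 + 0.5 * torch.randn(n, DATA_DIM)

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = G(torch.randn(real.size(0), NOISE_DIM)).detach()
    loss_d = (bce(D(real), torch.ones(real.size(0), 1))
              + bce(D(fake), torch.zeros(fake.size(0), 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    fake = G(torch.randn(64, NOISE_DIM))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```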

Developers can apply to receive a DeepComposer keyboard once it becomes available, or use the new virtual keyboard in the AWS console.

Lastly, AWS launched Amazon SageMaker Operators for Kubernetes, which lets data scientists using Kubernetes train, tune, and deploy AI models in Amazon’s SageMaker machine learning development platform. AWS customers can install SageMaker Operators on Kubernetes clusters to create Amazon SageMaker jobs natively using the Kubernetes API and command-line Kubernetes tools.

Specifically, users can make calls to SageMaker that kick off capabilities like Managed Spot Training, which cuts training costs by running jobs on spare compute capacity, and distributed training, which reduces training time by scaling to multiple nodes with graphics chips. Compute resources are preconfigured and optimized, provisioned only when requested, scaled as needed, and shut down automatically when jobs complete. Additionally, hyperparameters are optimized automatically, and fully trained models are deployed to fully managed autoscaling clusters spread across multiple data centers.
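In practice, a training job is declared as a Kubernetes custom resource and submitted with standard tooling. The sketch below uses the official Python Kubernetes client; the CRD group, kind, and spec fields are assumptions based on the operator’s documented TrainingJob resource and should be checked against the installed CRD, and the role ARN, image URI, and bucket are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # uses your current kubectl context
api = client.CustomObjectsApi()

# Illustrative TrainingJob manifest; the spec fields are assumptions and
# should be verified with `kubectl explain trainingjob.spec` on your cluster.
training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "training-demo"},
    "spec": {
        "trainingJobName": "training-demo",
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/example-algo:latest",  # placeholder
            "trainingInputMode": "File",
        },
        "resourceConfig": {
            "instanceType": "ml.p3.2xlarge",
            "instanceCount": 1,
            "volumeSizeInGB": 10,
        },
        "outputDataConfig": {"s3OutputPath": "s3://example-bucket/output"},  # placeholder
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

# Creating the custom object mirrors `kubectl apply -f trainingjob.yaml`;
# the operator watches the resource and launches the SageMaker job.
api.create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",
    version="v1",
    namespace="default",
    plural="trainingjobs",
    body=training_job,
)
```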

“Now with Amazon SageMaker Operators for Kubernetes, customers can continue to enjoy the portability and standardization benefits of Kubernetes … along with integrating the many additional benefits that come out-of-the-box with Amazon SageMaker, no custom code required,” wrote AWS Deep Learning senior product manager Aditya Bindal in a press release.

Amazon SageMaker Operators for Kubernetes are generally available in the US East (Ohio), US East (N. Virginia), US West (Oregon), and EU (Ireland) AWS Regions.
