tinyML Summit 2022
March 28, 2022 @ 8:00 am - March 30, 2022 @ 5:00 pm PDT
Registration will open by January 31
What seemed like a dream a few years ago is quickly turning into reality. Problems that once required gigabytes to solve came to need only megabytes, and then kilobytes. Join us at the tinyML Summit 2022 to take part in the sharing, learning, and celebration of tinyML.
With ever more pervasive advances in technology and algorithms, tinyML is rapidly becoming a reality. In the tinyML community, we stand on the shoulders of giants: it is the remarkably open and collaborative nature of ML technology that allows this field to advance so quickly. Since its inception in 2019, the tinyML community has grown tremendously and has benefited greatly from supporting one another. We each hold unique pieces of the puzzle, but it is by leveraging the collective knowledge of the community that we can move forward together more quickly.
In conjunction with the Summit, the tinyML Research Symposium 2022 will be held on Monday, March 28.
VENUE & ACCOMMODATIONS
The hotel room block cut-off date is Monday, March 7, 2022. After March 7, rooms may still be available, but at a higher prevailing rate.
Six sessions have been planned, each focusing on a different area of tinyML. You will hear from selectively invited speakers in each area:
Talk to Me! TinyML opportunities in smart audio
Session Leader: Chris Rowen, Cisco
Audio is a uniquely attractive target for tiny machine learning. On one hand, audio, especially speech, carries richly layered information about communication intent, identity, location, emotion, and events. On the other hand, it is densely encoded, so this rich diversity of information is surprisingly hard to extract and interpret. Machine learning methods prove remarkably effective, opening up countless applications in speech recognition, event detection, voice triggering, and speech transformation and generation. The relatively low bit rates of audio make it possible to capture and process it within small power budgets, and privacy and latency concerns often make it necessary to concentrate audio processing at the edge – a perfect storm of tinyML audio opportunity.
tinyML Vision: Efficient design of tiny vision models for IoT devices
Session Leader: Song Han, MIT
Computer vision is a popular application of tiny machine learning. Many vision applications require real-time, low-latency processing, and privacy is a high priority, so recognizing images and videos locally on tiny edge devices offers great opportunity. However, the large computation and memory footprint poses a challenge for these low-power devices, and the problem only worsens as resolution requirements grow. Opportunities such as model compression and neural network and accelerator co-design open up a larger design space for tiny machine learning for vision, which has a large market in smart homes, smart factories, smart driving, smart healthcare, and more.
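To make the model-compression idea mentioned above concrete, here is a minimal sketch of unstructured magnitude pruning in NumPy. The function name, the toy data, and the sparsity target are illustrative assumptions, not taken from any specific talk:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Unstructured magnitude pruning: zero out the smallest-magnitude weights.

    Weights tied at the threshold are also pruned, so slightly more than
    `sparsity` of the weights may be removed in degenerate cases.
    """
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy example: prune half of a ten-weight "layer" (illustrative only).
w = np.arange(1.0, 11.0, dtype=np.float32)
pruned = magnitude_prune(w, sparsity=0.5)
```

The surviving large-magnitude weights keep most of the model's accuracy, while the zeros can be exploited by sparsity-aware kernels or compressed storage on the device.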
tinySensing: Scaling tinyML solutions with optimized edge sensing capabilities
Session Leader: Steve Whalley, tinyML Board of Directors member
As more is done at the edge with less power, code, and real estate, what does this mean for sensor requirements to keep pace with explosive tinyML growth? Going beyond popular tinyML applications of camera sensing for vision and MEMS microphones for audio, will traditional inertial, environmental, and medical sensing capabilities suffice? What new innovations and ‘smart capabilities’ will be required to squeeze more performance and bandwidth, lower power, and more memory, analog, and ASIC capability into edge sensing solutions? Will changes in machine learning methods and tools be needed? Will printed sensing, with its promise of tiny footprints and low power, be the future darling of tinyML? This session will delve into some of the opportunities, challenges, and trends in the tinySensing world and provide a rigorous dialog on what is needed.
tinyHardware – there’s plenty of room at the bottom
Session Leader: Francesco Conti, University of Bologna
tinyHardware stands at the crossroads of algorithm, technology, and architecture. Aggressive algorithmic optimizations such as quantization, pruning, and analog computation do not destroy machine learning’s capability to decode and interpret information – rather, they introduce smooth degradations that can often be tolerated. On the other hand, these optimizations offer a large “attack surface” for emerging technologies such as non-volatile memory, analog in-memory computing, and innovative chip-to-chip links – as well as for novel architectures in terms of dataflow choices, sparsity support, core/memory coupling, and caching schemes. Altogether, these opportunities define a huge design space, spanning from low-bitwidth ISA extensions to multi-chip systolic arrays to Systems-on-Chip with core-coupled in-memory accelerators – a space we have only just begun to navigate in the quest to raise the system-level energy efficiency of tinyML applications while enabling more complex tiny applications.
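To illustrate the "smooth degradation" point above, here is a minimal sketch of symmetric int8 post-training quantization in NumPy. The function names, the single-scale symmetric scheme, and the random stand-in weights are illustrative assumptions, not tied to any particular talk or toolchain:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: map floats to int8 with one scale."""
    scale = float(np.max(np.abs(weights))) / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# A random weight matrix stands in for a trained layer (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)

# The degradation is smooth and bounded: each weight moves by at most half
# a quantization step, while storage shrinks 4x (float32 -> int8).
max_err = float(np.max(np.abs(w - w_hat)))
```

The bounded per-weight error is exactly the kind of tolerable degradation that emerging memory and in-memory computing technologies can exploit.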
tinySW/Tools: Enabling tiny experiences for the real-world
Session Leader: Alessandro Grande, Edge Impulse
Software and tools specifically designed for tiny machine learning (tinyML) are empowering researchers, data scientists, and embedded engineers to pave the way for a whole new class of experiences. Initially, the focus was on creating software and tools to make each step of the process possible: on one hand, enabling data scientists to create datasets and to train and optimize models able to run on resource-constrained devices; on the other, helping embedded engineers use those models to develop applications capable of efficiently performing inference on the microcontrollers themselves. We are now starting to see a shift toward software and tools that allow scientists and engineers to leverage tinyML in real-world use cases. Opportunities lie in tightly integrated end-to-end platforms, new data augmentation methods, performance analysis tools, and optimization techniques. This tinySW/Tools session will explore some of the recent advancements in the field while uncovering some of the challenges we face as an industry.
Session Leader: Danilo Pau, STMicroelectronics
Latest-generation microcontrollers, sensors, digital signal processors, and ultra-low-power accelerators are opening up numerous possibilities for compact machine learning (ML) applications with limited storage and power consumption. Now, the more than 10 million C developers of the embedded community are calling for productivity tools that support ML pipeline design from the earliest concept and design phases. These tools should automate ML topology design and optimize configurations without requiring developers to craft a new solution for each problem.
This session will focus on tools for automated tinyML design and hyperparameter search. Industrial and university experts will discuss current and next-generation tools for embedding ML in small, everyday applications. This is an excellent opportunity for attendees to learn how to deploy their models on tiny devices and ramp up ML design productivity.
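As a sketch of what hyperparameter search looks like at its simplest, here is an illustrative random-search loop in Python. The search-space names and the `evaluate` callback are hypothetical stand-ins, not taken from any specific tool discussed in the session:

```python
import random

# Hypothetical search space for a tiny CNN; names and values are illustrative.
SEARCH_SPACE = {
    "num_filters": [8, 16, 32],
    "kernel_size": [3, 5],
    "learning_rate": [1e-3, 1e-2],
}

def sample_config(space, rng):
    """Draw one configuration by picking a random value per hyperparameter."""
    return {name: rng.choice(values) for name, values in space.items()}

def random_search(evaluate, space, trials=50, seed=0):
    """Keep the best of `trials` randomly sampled configurations."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = sample_config(space, rng)
        score = evaluate(cfg)  # in practice: train and score a tiny model
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# A stand-in objective; real tools would measure accuracy under size and
# latency budgets for the target microcontroller.
best_cfg, best_score = random_search(
    lambda c: c["num_filters"] / c["kernel_size"], SEARCH_SPACE
)
```

Production AutoML tools replace this loop with smarter strategies (Bayesian optimization, evolutionary search) and fold device constraints such as flash and RAM footprints directly into the objective.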