
"What is" Series #3: What is TensorFlow Lite?

TensorFlow Lite is a set of tools to help developers run TensorFlow models on mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and small binary size.

TensorFlow Lite consists of two main components:

  • The TensorFlow Lite interpreter, which runs specially optimized models on many different hardware types, including mobile phones, embedded Linux devices, and microcontrollers.

  • The TensorFlow Lite converter, which converts TensorFlow models into an efficient form for use by the interpreter, and can introduce optimizations to improve binary size and performance.
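The two components above can be sketched in a few lines of Python. This is a minimal, illustrative example assuming TensorFlow 2.x is installed; the tiny Keras model is a stand-in for a real trained model.

```python
# Sketch of the converter half of the TensorFlow Lite workflow
# (assumes TensorFlow 2.x; the model here is illustrative only).
import tensorflow as tf

# A trivial Keras model standing in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# The converter turns the TensorFlow model into a compact FlatBuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimizations
tflite_model = converter.convert()  # bytes, ready for the interpreter

print(f"Converted model is {len(tflite_model)} bytes")
```

The resulting bytes are what you ship to the device, where the interpreter runs them.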

Machine learning at the edge

TensorFlow Lite is designed to make it easy to perform machine learning on devices, "at the edge" of the network, instead of sending data back and forth from a server. For developers, performing machine learning on-device can help improve:

  • Latency: there's no round-trip to a server

  • Privacy: no data needs to leave the device

  • Connectivity: an Internet connection isn't required

  • Power consumption: network connections are power-hungry

TensorFlow Lite works with a huge range of devices, from tiny microcontrollers to powerful mobile phones.
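The on-device inference loop looks the same across that range of devices. Below is a hedged sketch using the Python `tf.lite.Interpreter` (assumes TensorFlow 2.x; on a phone or microcontroller you would use the corresponding mobile or micro runtime, but the allocate/set/invoke/get pattern is the same).

```python
# Minimal sketch of on-device-style inference with the TF Lite interpreter
# (assumes TensorFlow 2.x; model and input are illustrative only).
import numpy as np
import tensorflow as tf

# Stand-in model: convert a trivial Keras model to a TF Lite FlatBuffer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Run it locally with the interpreter -- no server round-trip involved.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sample = np.array([[1.0, 2.0, 3.0, 4.0]], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
print("Output shape:", result.shape)
```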

Microcontrollers and TinyML

Microcontrollers, such as those used on Arduino boards, are low-cost, single-chip, self-contained computer systems. They’re the invisible computers embedded inside billions of everyday gadgets like wearables, drones, 3D printers, toys, rice cookers, smart plugs, e-scooters, and washing machines. The trend toward connecting these devices is part of what is referred to as the Internet of Things.

Arduino is an open-source platform and community focused on making microcontroller application development accessible to everyone. The board we’re using here has an Arm Cortex-M4 microcontroller running at 64 MHz with 1 MB of Flash memory and 256 KB of RAM. This is tiny compared to cloud, PC, or mobile hardware, but reasonable by microcontroller standards.

There are practical reasons you might want to squeeze ML onto microcontrollers, including:

  • Function — wanting a smart device to act quickly and locally (independent of the Internet).

  • Cost — accomplishing this with simple, lower-cost hardware.

  • Privacy — not wanting to share all sensor data externally.

  • Efficiency — smaller device form-factor, energy-harvesting or longer battery life.
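To squeeze a model into a memory budget like the 1 MB of Flash on the board above, TensorFlow Lite supports post-training quantization. The sketch below shows full-integer (int8) quantization, assuming TensorFlow 2.x; the model, calibration data, and budget check are illustrative, not a definitive recipe.

```python
# Hedged sketch: full-integer (int8) post-training quantization to shrink
# a model toward microcontroller memory budgets (assumes TensorFlow 2.x;
# the model and calibration data are illustrative only).
import numpy as np
import tensorflow as tf

FLASH_BUDGET = 1 * 1024 * 1024  # 1 MB of Flash, as on the board described above

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

def representative_dataset():
    # Calibration samples so the converter can choose int8 scaling factors.
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

fits = len(tflite_model) < FLASH_BUDGET
print(f"{len(tflite_model)} bytes; fits in 1 MB Flash: {fits}")
```

Int8 weights are roughly a quarter the size of float32, which is often the difference between a model that fits on the chip and one that doesn't.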
