Processors - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/technologies/processors/
Designing machines that perceive and understand.

How NVIDIA and e-con Systems are Helping Solve Major Challenges In the Retail Industry
https://www.edge-ai-vision.com/2023/10/how-nvidia-and-e-con-systems-are-helping-solve-major-challenges-in-the-retail-industry/
Thu, 05 Oct 2023

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems.

e-con Systems has proven expertise in integrating our cameras into the NVIDIA platform, including Jetson Xavier NX / Nano / TX2 NX, Jetson AGX Xavier, Jetson AGX Orin, and NVIDIA Jetson Orin NX / Nano. Find out how our cameras are integrated into the NVIDIA platform, their popular use cases, and how they empower you to solve retail challenges.

The retail industry faces numerous challenges, including security risks, inventory management, and the need to enhance the shopping experience. NVIDIA-powered cameras are helping to address these challenges by providing retailers with real-time data and insights. In addition, these cameras are being used to enhance store security, optimize store layout and staffing, and more.

So, by leveraging the power of the NVIDIA platform, retailers can better understand their customers while improving operations and ultimately providing a more satisfying shopping experience.

In this blog, let’s discover more about the role of e-con Systems’ cameras integrated into the NVIDIA platform, how they help solve some major retail challenges, and their most popular use cases.

Read: e-con Systems launches 3D time of flight camera for NVIDIA Jetson AGX Orin and AGX Xavier

A quick introduction to NVIDIA and e-con Systems’ cameras

NVIDIA has long been involved in developing camera-centric processing platforms for various applications, with a focus on AI-powered edge computing and autonomous vehicles. One of its most notable releases is the Jetson Nano Developer Kit (released in 2019), built around a System-on-Module designed for AI-powered edge computing applications like object recognition and autonomous shopping.

As you may already know, e-con Systems has proven expertise in integrating our cameras into the NVIDIA platform. We support the entire NVIDIA Jetson family, including Jetson Xavier NX / Nano / TX2 NX, Jetson AGX Xavier, Jetson AGX Orin, and NVIDIA Jetson Orin NX / Nano. e-con Systems’ popular camera solutions come with advanced features such as a dedicated ISP, ultra-low-light performance, low noise, wide temperature range, LED flicker mitigation, bidirectional control, and long-distance transmission.

Benefits of using cameras powered by the NVIDIA platform

    • They work seamlessly with NVIDIA’s powerful GPUs, which are optimized for processing large amounts of data in real time. This allows for advanced image processing and analysis, making it possible for machines to “see” and understand their surroundings with greater accuracy and speed.
    • They are capable of capturing high-quality data that can be used to train deep neural networks, which can then be used for tasks such as object detection and recognition.
    • They are designed to be low-power and compact, making them ideal for use in embedded vision applications. This is particularly important for applications such as smart trolleys and smart checkout systems.
    • They are highly customizable, letting developers tailor them to specific applications and use cases. This flexibility makes it possible to create embedded vision solutions that are optimized for specific tasks and environments, providing better performance and reliability.

Read: Popular embedded vision use cases of NVIDIA® Jetson AGX Orin™

Major retail use cases of NVIDIA and e-con Systems

Smart Checkout

e-con Systems’ cameras, powered by the NVIDIA platform, are transforming smart checkout systems by enabling faster, more accurate, and more efficient checkout experiences for customers. For example, they can enable contactless checkout: customers avoid touching checkout equipment and interacting with cashiers, reducing the risk of transmitting infectious diseases.

A smart checkout system typically refers to a camera-enabled, automated object detection system at the billing or checkout counter. These systems can operate autonomously with limited supervision from human staff, offering benefits such as more effective utilization of retail staff, an enhanced shopping experience, data insights on shopping patterns, and more. The integrated camera runs algorithms that detect a wide variety of objects in a retail store.

Read: Key camera-related features of smart trolley and smart checkout systems

Smart Trolley

NVIDIA cameras are changing the game for retailers by providing real-time insights into customer behavior and preferences through the use of smart trolleys. These trolleys, equipped with cameras and sensors, identify products or the barcode on each item, enabling customers to pay directly at the cart. This can greatly reduce wait times and improve overall customer satisfaction.

Moreover, the data collected by these cameras can enable retailers to offer personalized product recommendations and promotions based on past purchases and interactions. This personalized approach can increase sales and customer loyalty.

Another significant advantage of NVIDIA cameras in smart trolleys is enhanced store security. The cameras can detect and track suspicious activity in real time, such as items being removed from trolleys without payment or abandoned trolleys blocking store aisles.

Read: How embedded vision is contributing to the smart retail revolution

Other retail use cases include:

    • Optimized store operations and improved inventory management: With real-time data on store traffic and product placement, retailers can make informed decisions about store layout, staffing, and inventory management, leading to more efficient operations and reduced costs.
    • Personalized shopping experiences for customers: By analyzing customer behavior and preferences captured in image data, retailers can offer personalized product recommendations and promotions. In turn, this leads to increased sales and customer satisfaction.

As the technology continues to evolve, it is likely that we will see even more innovative applications of NVIDIA-powered cameras in the retail industry.

NVIDIA and e-con Systems: An ongoing multi-year Elite partnership

NVIDIA and e-con Systems together have formed a one-stop ecosystem – providing USB, MIPI, GMSL, GigE, and FPD-Link camera solutions across several industries and significantly reducing time-to-market. This multi-year Elite partnership started with the Jetson Nano (0.5 TOPS) and continues strong with the AGX Orin (up to 275 TOPS).

Explore our NVIDIA Jetson-based cameras

If you are looking for an expert to help integrate NVIDIA cameras into your embedded vision products, please write to camerasolutions@e-consystems.com. You can also check out our Camera Selector page to get a full view of e-con Systems’ camera portfolio.

Ranjith Kumar
Camera Solution Architect, e-con Systems

FRAMOS Launches Event-based Vision Sensing (EVS) Development Kit
https://www.edge-ai-vision.com/2023/10/framos-launches-event-based-vision-sensing-evs-development-kit/
Wed, 04 Oct 2023

[Munich, Germany / Ottawa, Canada, 4 October] — FRAMOS launched the FSM-IMX636 Development Kit, an innovative platform allowing developers to explore the capabilities of Event-based Vision Sensing (EVS) technology and test potential benefits of using the technology on NVIDIA® Jetson with the FRAMOS sensor module ecosystem.

Built around SONY and PROPHESEE’s cutting-edge EVS technology, this developer kit simplifies the prototyping process and helps companies reduce time to market.

Event-based Vision Sensing (EVS)

Unlike conventional sensors that transmit all visible data in successive frames, the EVS sensor captures only the changed pixel data, specifically luminance changes. Each event package includes crucial information: pixel coordinates, timestamp, and polarity, resulting in efficient bandwidth usage.
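As a rough illustration of how such an event stream could be represented and processed on the host side, the sketch below models each event as a small record and computes a simple activity metric. The field names and units are assumptions for this example, not the IMX636 or Metavision data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """One EVS event: only changed pixels are reported (illustrative layout)."""
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 = luminance increased, -1 = luminance decreased

def count_activity(events: List[Event], window_us: int = 1000) -> float:
    """Rough activity metric: events per millisecond over the batch."""
    if not events:
        return 0.0
    span = max(e.t_us for e in events) - min(e.t_us for e in events) + 1
    return len(events) / max(span / window_us, 1e-9)

# Example: a tiny synthetic batch of events
batch = [Event(10, 20, 1000, 1), Event(11, 20, 1250, -1), Event(12, 21, 1900, 1)]
print(f"{count_activity(batch):.2f} events/ms")
```

Because only changed pixels generate events, a mostly static scene produces very few records, which is the source of the bandwidth and power savings described below.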

By reducing the transmission of redundant data, this technology lowers energy consumption and optimizes processing capacities, reducing the cost of vision solutions.

EVS sensors provide high-speed and low-latency data output. They give outstanding results in monitoring vibration and movement in low-light conditions.

The FSM-IMX636 Development Kit consists of an IMX636 Event-based Vision Sensor board with a lens, all necessary adapters, accessories, and drivers, crafted into a comprehensive, easy-to-integrate solution for testing EVS in embedded applications on the NVIDIA® Jetson AGX Xavier™ and NVIDIA® Jetson AGX Orin™ platforms.

The PROPHESEE Metavision® Intelligence Suite provides machine learning-supported event data processing, analytics, and visualization modules.

FRAMOS’ new Development Kit is an affordable, simple-to-use, and intelligent platform for testing, prototyping, and faster launch of diverse EVS-based applications in a wide range of fields, including industrial automation, medical, automotive and mobility, and IoT and monitoring.

For more information, visit this link.

About FRAMOS

FRAMOS® is the leading global expert in vision systems, dedicated to innovation and excellence in enabling devices to see and think.

For more than 40 years, the company has supported clients worldwide in building pioneering vision systems.

Throughout all phases of vision system development, from hardware and software solutions to component selection, customization, consulting, prototyping, and mass production, companies worldwide rely on FRAMOS’ proven expertise.

Thanks to its engineering excellence and a large base of loyal clients, the company operates successfully on three continents.

Over 180 experts working in its Munich, Ottawa, Zagreb, and Čakovec offices are committed to developing cutting-edge imaging solutions for a wide range of applications across industries.

For more information, please visit www.framos.com or follow us on LinkedIn, Facebook, Instagram or Twitter.

 

“Optimizing Image Quality and Stereo Depth at the Edge,” a Presentation from John Deere
https://www.edge-ai-vision.com/2023/10/optimizing-image-quality-and-stereo-depth-at-the-edge-a-presentation-from-john-deere/
Wed, 04 Oct 2023

Travis Davis, Delivery Manager in the Automation and Autonomy Core, and Tarik Loukili, Technical Lead for Automation and Autonomy Applications, both of John Deere, present the “Optimizing Image Quality and Stereo Depth at the Edge” tutorial at the May 2023 Embedded Vision Summit. John Deere uses machine learning and computer vision (including stereo…

CircuitSutra Technologies Demonstration of Virtual Prototyping for Pre-silicon Software Development
https://www.edge-ai-vision.com/2023/10/circuitsutra-technologies-demonstration-of-virtual-prototyping-for-pre-silicon-software-development/
Tue, 03 Oct 2023

Umesh Sisodia, President and CEO of CircuitSutra Technologies, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Sisodia demonstrates a virtual prototype of an ARM Cortex-based SoC, developed using SystemC and the CircuitSutra Modelling Library (CSTML). It is able to boot Linux and is suitable for software development.

CircuitSutra provides SoC modeling services and supports its customers in adopting SystemC-based shift-left ESL methodologies. These methodologies enable hardware/software co-design, pre-silicon firmware development through virtual prototypes, architecture optimization for metrics such as performance and power consumption, and high-level synthesis.

DeGirum Demonstration of Streaming Edge AI Development and Deployment
https://www.edge-ai-vision.com/2023/10/degirum-demonstration-of-streaming-edge-ai-development-and-deployment/
Mon, 02 Oct 2023

Konstantin Kudryavtsev, Vice President of Software Development at DeGirum, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Kudryavtsev demonstrates streaming edge AI development and deployment using the company’s JavaScript and Python SDKs and its cloud platform.

On the software front, DeGirum continues to prioritize user experience and adaptability. The company has launched a user-friendly Python SDK and will soon launch a JavaScript SDK. The upcoming SDK promises seamless real-time operations directly from browsers, a testament to DeGirum’s commitment to enhancing accessibility and ease-of-use for developers worldwide.

In the demo, Kudryavtsev showcases DeGirum’s JavaScript SDK executing YOLO-based face detection directly from the browser using a local AI accelerator. Concurrently, he demonstrates the company’s Python SDK running YOLO-based hand detection via the cloud. Both SDKs assist with preprocessing and postprocessing tasks and improve efficiency. Both implementations utilize local camera feeds and display results in real time.
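For readers unfamiliar with the streaming pattern described (grab frames from a local camera, run detection, overlay results, display in real time), here is a minimal OpenCV sketch of that loop. The run_face_detector function is a placeholder for whatever model or SDK performs inference; this is not DeGirum’s actual SDK API.

```python
import cv2

def run_face_detector(frame):
    """Placeholder for the real detector (e.g., a YOLO-style model served by an SDK).
    Returns a list of (x, y, w, h) boxes; here it returns nothing."""
    return []

cap = cv2.VideoCapture(0)                 # local camera feed
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h) in run_face_detector(frame):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("stream", frame)       # real-time display of results
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```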

Cadence Demonstrations of Generative AI and People Tracking at the Edge
https://www.edge-ai-vision.com/2023/10/cadence-demonstrations-of-generative-ai-and-people-tracking-at-the-edge/
Mon, 02 Oct 2023

Amol Borkar, Director of Product and Marketing for Vision and AI DSPs at Cadence Tensilica, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Borkar demonstrates two applications running on customers’ SoCs, showcasing Cadence’s pervasiveness in AI.

The first demonstration is of people detection and tracking, performed on MediaTek’s Genio 500 SoC. As part of the demo, the camera captures an image and a person is detected using an object detection AI network, followed by an OpenPose network that overlays a skeletal representation of the person’s pose in real time. The AI computation required for this demo is performed on MediaTek’s integrated APU (AI Processing Unit), which comprises two Tensilica Vision P6 DSPs. Additionally, a servo-controlled articulating arm moves the camera to actively track the person’s movement.
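The two-stage flow described above (detect people first, then run a pose network on each detection) can be sketched generically as below. The helper functions and dummy values are illustrative placeholders, not Cadence’s or MediaTek’s software, which runs optimized networks on the Vision P6 DSPs.

```python
import numpy as np

def detect_people(frame: np.ndarray) -> list:
    """Placeholder object detector: returns person bounding boxes as (x, y, w, h)."""
    return [(100, 50, 80, 200)]  # dummy box for illustration

def estimate_pose(person_crop: np.ndarray) -> list:
    """Placeholder pose network: returns (x, y) keypoints relative to the crop."""
    return [(40, 20), (40, 60), (20, 120), (60, 120)]  # head, torso, feet

def track_pipeline(frame: np.ndarray) -> list:
    """Stage 1: detect people. Stage 2: run pose estimation on each person crop."""
    skeletons = []
    for (x, y, w, h) in detect_people(frame):
        crop = frame[y:y + h, x:x + w]
        keypoints = [(x + kx, y + ky) for (kx, ky) in estimate_pose(crop)]
        skeletons.append(keypoints)
    return skeletons

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
print(track_pipeline(frame))
```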

The second demonstration is of a large image language model running in an industrial camera provided by Labforge.ca. Unlike typical LLM demos, this demo takes images and text prompts as input and provides, in real time, a confidence level that the prompts match the captured image. The “Bottlenose” camera system features an SoC by Toshiba that integrates four Tensilica Vision P6 DSPs to help accelerate the performance of today’s generative AI models. The Vision P6 DSP handles pre/post-processing, various CV/imaging algorithms, and multiple layers of the neural network.

From ‘Smart’ to ‘Useful’ Sensors
https://www.edge-ai-vision.com/2023/10/from-smart-to-useful-sensors/
Mon, 02 Oct 2023

  • Can anyone make ‘smart home’ devices useful to consumers? A Calif. startup, Useful Sensors, believes it has the magic potion.

  • What’s at stake? Talk of edge AI, particularly machine learning, has captivated the IoT market. Yet, actual consumer products with local machine learning capabilities, are rare. Who’s ready to pull that off? Will it be a traditional MCU supplier or an upstart — like Useful Sensors?

  • Tech jargon like “smart home” and “smart sensor” has been overused to the point where the real value the related technologies might deliver reaches most non-techie consumers largely as fog.

    Why, for instance, would any sensible person fiddle with apps, options and swipes on a smartphone to turn off the light when there’s a simple switch within reach?

    Here’s the good news. We found an executive who shares our skepticism about smart homes.


    Pete Warden, Useful Sensors CEO

    Pete Warden is CEO and co-founder of Useful Sensors (Mountain View, Calif.).

    Warden pointedly named his 2022 startup Useful — rather than “Smart” — Sensors. When he spins his dream of a home equipped with Useful Sensors’ hardware and software solutions, Warden calls it “a magical house,” not a smart home.

    In Warden’s magical house, one simply enters a room, looks at a light and says “On.” And then there is light. “Or your TV will pause when you get up to have a cup of tea,” said Warden.

    Useful Sensors’ brainstorm is “low-cost, easy-to-integrate hardware modules that bring machine learning capabilities like gesture recognition, presence detection, and voice interfaces to TVs, laptops, and appliances while preserving users’ privacy,” he explained.

    Warden said, “Consumers might not even know that there is AI in everyday consumer devices. But those devices just do the right thing. That’s why I like the idea of ‘useful,’ instead of ‘smart.’ Smart is kind of like showing off.”

    Emphasizing the importance of usefulness in technologies, he added, “I hope it to be in the background and doing what you want.”

    Deep learning inference on embedded devices is a rapidly growing field with many potential applications. Edge AI is the buzz at every technology conference. Virtually every MCU supplier from STMicroelectronics and Renesas to Silicon Labs is promoting a new MCU with AI capabilities.

    Ex-Google

    Many MCU suppliers try to tackle from a hardware angle the broad challenge of implementing machine learning on resource-constrained processors. They see the name of the game as running the AI stack as efficiently as possible on their tailored hardware, while substantially reducing power consumption.

    Warden, in contrast, sees these challenges and thinks software. After all, he takes credit for coining the term “TinyML,” the application of machine learning in a tiny footprint within embedded platforms.

    Prior to Useful Sensors, Warden was at Google as one of the members of the original TensorFlow team. After his team got TensorFlow running well on phones, Warden got busy with TensorFlow Lite Micro, a machine learning framework for embedded systems.

    His obsession with running machine learning on embedded devices only intensified. With Qualcomm senior director Evgeni Gousev, Warden launched the TinyML conference four years ago.

    Eager to evangelize TinyML, Warden also wrote the standard embedded ML textbook, TinyML, published by O’Reilly.

    He also teaches embedded machine learning at Stanford University.

    ‘Don’t make us think’

    Naturally, Warden and his team were very excited about the potential of ML applications in various consumer devices. They figured they were doing everything necessary for system OEMs to embrace TensorFlow Lite Micro.

    But the real-world reaction to Warden’s pitch was tepid.

    “When I went to talk to light switch manufacturers or TV manufacturers to try and get them to adopt this open-source framework made available in TensorFlow Lite Micro, they listened to me” politely, said Warden. He offered “free code, examples, courses, books, everything else available to system companies.”

    But at the end, Warden’s audience typically said, “Look, we barely have a software engineering team. We do not have a Machine Learning Team. Can you just give us the thing that does a voice interface?” Or, “something that’s ready-made?”

    Warden: “They literally told us, ‘Don’t make us think. Don’t make us code.’”

    After a series of rebuffs, Warden said he realized, “What we needed to do was to put together something that was as easy to integrate as a temperature or pressure or an accelerometer sensor. Those sensors should give people machine learning capabilities.”

    This was a scheme that would require Google to release hardware, an unlikely prospect. “Google wouldn’t want to risk the company’s whole reputation behind hardware products.”

    So, bye bye, Google, and on to Useful Sensors.

    Building blocks of Useful Sensors’ offerings

    Typically, Useful Sensors offers a small module with a sensor on one side and a little microcontroller opposite.

    A “Person Sensor” comes with a small camera and a small MCU. The module is preloaded with software to detect people, and it includes an I²C interface. The processing cores used in the module are all off-the-shelf, said Warden, such as a Synopsys ARC core or an Arm Cortex-M, “to drive down the cost.”
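To make the "as easy as a temperature sensor" idea concrete, here is a minimal sketch of how a host board might poll such a module over I²C using the smbus2 library. The I²C address (0x62) and the byte layout of the result are assumptions made for this illustration, not the module's documented register map.

```python
from smbus2 import SMBus

SENSOR_ADDR = 0x62      # assumed I2C address, for illustration only
RESULT_LEN = 5          # assumed payload layout: [num_people, x, y, w, h]

def read_person_metadata(bus_id: int = 1):
    """Poll the module and return only metadata (e.g., how many people are seen)."""
    with SMBus(bus_id) as bus:
        data = bus.read_i2c_block_data(SENSOR_ADDR, 0x00, RESULT_LEN)
    num_people = data[0]
    first_box = tuple(data[1:5]) if num_people else None
    return num_people, first_box

# Example (on a board with an I2C bus wired to the module):
# people, box = read_person_metadata()
# print(people, box)
```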

    Useful Sensors is also mindful of privacy issues. “All one can get on our module is the metadata,” said Warden. “So, it can tell, for example, there are three people in the room.”

    As with any consumer electronics device, the tricky part is figuring out the user experience, explained Warden. It’s a challenge to make Useful Sensors’ features “both discoverable and seamless” in consumer appliances. Warden’s theoretical example of subtle consumer symbiosis: “Imagine your TV … when you sat down, it just booted what you were watching last.”

    ‘We do Transformer’

    Useful Sensors’ software expertise shines especially when squeezing software into off-the-shelf hardware.

    Warden boasts that his team knows how to do stuff on $2 hardware that would require someone else to use a $20 chip.

    Another advantage of Useful Sensors’ software expertise is its nimbleness in keeping up with rapid advancements in machine learning. “For example, we use Transformers,” said Warden. The Transformer is a newer building block for machine learning models, different from the convolutional neural network models familiar to many people already using AI.

    The “T” in ChatGPT stands for Transformer, as it deploys the Generative Pre-trained Transformer model (GPT-3.5).

    Because Transformer employs a different kind of compute, “a lot of the fixed function neural network accelerators that people have brought out in the last year or two, do not support Transformers,” explained Warden. “So it’s an example of why having some degree of programmability is really important for these kinds of hardware, because what people want to run is changing so fast.”

    Who has signed up so far?

    Useful Sensors has signed Non-Disclosure Agreements and Evaluation agreements with several companies whose names Warden declined to reveal. But the company hopes to announce partnerships “over the next few months,” said Warden.

    For AI hardware startups to sign agreements with automotive companies is unimaginably tough – largely because so many startups don’t last. Working with consumer electronics companies is equally tough. Warden agreed. “CE companies have very high standards,” he said, adding: “I am really passionate about trying to do something that actually helps people’s everyday lives, instead of gimmicks. I believe consumer electronics is such a good way to reach people.”

    Warden noted that among the different Useful Sensors’ applications, the one gaining momentum most is “having local voice interfaces” that don’t require sending the voice request to the cloud. Functions from lighting to audio equipment and TVs work better with local voice control, but don’t have the capability today. “That’s why so few people actually use Alexa or Siri,” Warden noted.

    Undoubtedly, local voice will represent a big user experience challenge. Warden said, optimistically, “There’s something magical about just being able to talk to all of the objects around your home … like a Disney house where you walk in and say, ‘Hello, coffee pot!’”

    Bottom line

    Enabling edge AI is easier said than done. The crux of the issue is the lack of talent. The hardest thing, whether for an MCU supplier or a startup like Useful Sensors, is to find skilled people good at machine learning who also know how to build hardware in the embedded space.

    Junko Yoshida
    Editor in Chief, The Ojo-Yoshida Report


    This article was published by The Ojo-Yoshida Report. For more in-depth analysis, register today and get a free two-month all-access subscription.

    The History of AI: How Generative AI Grew from Early Research
    https://www.edge-ai-vision.com/2023/09/the-history-of-ai-how-generative-ai-grew-from-early-research/
    Fri, 29 Sep 2023

    This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm.

    From how AI started to how it impacts you today, here’s your comprehensive AI primer

    When you hear Artificial Intelligence (AI): Do you think of the Terminator or Data on “Star Trek: The Next Generation”? While neither example of artificial general intelligence is available today, AI has given rise to a form of machine intelligence, one trained on huge volumes of publicly available data, proprietary data and/or sensor data.

    As we embark on our AI on the Edge series, we’d like to explain exactly what we mean by “AI” — especially since this broad category can include everything from machine learning to neural networks and deep learning. Don’t be embarrassed: “What is AI, exactly?” requires a more technical and nuanced answer than you might expect.


    Alan Turing, an outstanding mathematician, introduced the concept of “Computing Machinery and Intelligence.”

    History of AI: The origins of AI research

    Answering “what is AI” is much easier when you know the history of AI.

    By the 1950s, the concept of AI had taken its first steps out of science fiction and into the real world as we began to build capable electronic computers. Researcher Alan Turing began to explore the mathematical possibility of building AI. He suggested that machines, like humans, can use information and reasoning to solve problems and make decisions.

    These concepts were introduced in his famous 1950 paper titled “Computing Machinery and Intelligence,” in which he discussed the potential for intelligent machines and proposed a test of their intelligence, now called the Turing Test. The test posits that if a machine can carry on a conversation (over a text interface) that is indistinguishable from a conversation with a human being, then it is reasonable to say that the machine is “thinking.” Using this simplified test, it is easier to argue that a “thinking machine” is at least plausible.


    The world’s first programmable, electronic, digital computers were limited in terms of performance.

    AI’s proof of concept

    In the 1950s, computers were still very limited and very expensive to own and operate, limiting their use in further AI research. Yet, researchers were not deterred. Five years later, a proof of concept was initiated with a program called Logic Theorist, likely the first AI program to be written. In 1956, the program was shown at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). This historic conference brought together top researchers from various fields for an open-ended discussion on AI, the term which was coined at the event by host John McCarthy, then a mathematics professor at Dartmouth.

    From 1957 to 1974, AI research flourished as computers could store more information and became faster, cheaper and more accessible. Machine learning algorithms also improved, and people became better at knowing which algorithm to apply to their problem. But mainstream applications were few and far between, and AI research money began to dry up. The optimistic vision of AI researchers like Marvin Minsky in the ’60s and ’70s looked to be going nowhere.


    Deep learning took off due to increased processing capabilities, the abundance of data, and improved AI algorithms.

    Modern day leaps in AI

    Continued improvements in computing and data storage reinvigorated AI research in the 1980s. New algorithms and new funding fed an AI renaissance. During this period, John Hopfield and David Rumelhart popularized “deep learning” techniques which allowed computers to learn using experience.

    This milestone was followed with certain landmark events. In 1997, IBM’s chess-playing computer, Deep Blue, defeated reigning world chess champion and grandmaster Garry Kasparov. It was the first time a reigning world chess champion had lost to a computer. In the same year, speech-recognition software, developed by Dragon Systems, became widely available. In 2005, a Stanford robot vehicle won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. And just two years later, a vehicle from Carnegie Mellon University won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while avoiding traffic hazards and following all traffic laws. Finally, in February 2011, in a “Jeopardy!” quiz show exhibition match, IBM’s question answering system, named Watson, defeated the two greatest “Jeopardy!” champions of the day.

    Exciting as they were, these public demonstrations weren’t mainstream AI solutions. The DARPA challenges, though, did spur autonomous vehicle research that continues to this day.

    What really kicked off the explosion in AI applications was the use of math accelerators like graphics processing units (GPUs), digital signal processors (DSPs), field programmable gate arrays (FPGAs) and neural processing units (NPUs), which increased processing speeds by orders of magnitude over CPUs alone.

    While CPUs can process tens of threads, math accelerators like DSPs, GPUs, and NPUs process hundreds or thousands of threads all in parallel. At the same time, AI researchers also got access to vast amounts of training data through cloud services and public data sets.

    In 2018, large language models (LLMs) trained on vast quantities of unlabeled data became the foundation models that can be adapted to a wide range of specific tasks. More recent models, such as GPT-3 released by OpenAI in 2020, and Gato released by DeepMind in 2022, pushed AI capabilities to new levels. These generative AI models have made AI more useful for a much wider range of applications. Where previous uses of AI were mostly about recognition such as detecting bad parts in a product line, classification such as recognizing faces in a video feed, and prediction such as determining the path of an autonomous vehicle, generative AI can be used to create new text, images, or other content based on input prompts.


    Digital neurons, inspired by biological neurons, are the building blocks of digital neural networks.

    How AI works and AI technology definitions

    The fundamental approach of modern AI is inspired by the way that animal brains (including human brains) function using a digital neuron modeled after those of the biological brain. Collections of these digital neurons process an input in different layers with the results of each layer feeding the next layer. This structure is called a neural network. Each neuron has multiple inputs that are each given a specific weight. The weighted inputs are summed together, and the output is fed to an activation function. An activation function, such as the popular rectified linear unit, known as a ReLU, introduces the property of nonlinearity to a deep learning model. The outputs of the activation function are the inputs into the next layer of the neural network. The collective weights and any bias applied to the summation function represent the parameters of the model.

    Neural network architectures vary in the number of interconnected neurons per layer and the number of layers, which all impact accuracy at the cost of performance, power and size.
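A minimal NumPy sketch of the arithmetic just described: each neuron computes a weighted sum of its inputs plus a bias, and the result passes through a ReLU activation whose outputs feed the next layer.

```python
import numpy as np

def relu(x):
    """Rectified linear unit: passes positives, zeroes out negatives (the nonlinearity)."""
    return np.maximum(0.0, x)

def dense_layer(inputs, weights, bias):
    """One layer of digital neurons: weighted sum of inputs plus bias, then activation."""
    return relu(weights @ inputs + bias)

# Three inputs feeding a layer of two neurons
x = np.array([0.5, -1.0, 2.0])
W = np.array([[0.2, 0.8, -0.5],     # weights of neuron 1
              [1.0, -0.3, 0.1]])    # weights of neuron 2
b = np.array([0.1, -0.2])
print(dense_layer(x, W, b))         # this output becomes the next layer's input
```

The weights and biases here are the "parameters" of the model; training is the process of finding values for them, as described below.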


    A deep neural network consists of multiple hidden layers between the input and output layers.

    The deep in deep learning

    The “deep” in “deep learning” refers to the use of many layers in the network. Major increases in computing power, especially as delivered by GPUs, NPUs and other math accelerators, of around a thousand-fold or more have made the standard backpropagation algorithm feasible for training networks that are many layers deeper, and have reduced training times from many months to days.

    The values of digital neuron parameters are determined through a learning process. Humans learn throughout life from experiences and our senses. Because AI itself does not have life experiences or senses, it must learn through a digital imprint that’s called training.

    Neural networks “learn” (or are trained) by processing examples.

    In supervised learning, the examples contain known “inputs” and “outputs”; training forms probability-weighted associations between the two, which are stored within the data structure of the neural network itself (called the “model”).

    The training of a neural network from a given example is usually conducted by determining the difference between the processed output of the network (often a prediction) and a desired output. Minimizing the difference between the prediction and the desired output is then used to adjust the network iteratively until it converges to the desired accuracy. This algorithm is called backpropagation.
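As a toy illustration of this idea (not the full backpropagation algorithm), the loop below repeatedly compares a tiny model's predictions to the desired outputs and nudges its two parameters to shrink the difference, iterating until it converges.

```python
import numpy as np

# Toy supervised data: inputs with known desired outputs (y = 2*x + 1)
X = np.array([0.0, 1.0, 2.0, 3.0])
Y = 2.0 * X + 1.0

w, b, lr = 0.0, 0.0, 0.05           # start from arbitrary parameters
for step in range(500):
    pred = w * X + b                # forward pass: the model's prediction
    err = pred - Y                  # difference between prediction and desired output
    # Gradients of the mean squared error with respect to each parameter
    grad_w = 2.0 * np.mean(err * X)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w                # adjust the parameters to reduce the error
    b -= lr * grad_b

print(round(w, 3), round(b, 3))     # converges toward w = 2, b = 1
```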

    The more data you feed the neural network, the more examples it accumulates knowledge about. That said, the neural network model itself needs to be relatively large to represent complex sets of information.

    Also, a significant number of examples need to be used in training large models to make them more capable and accurate.

    The trained neural network model is then used to interpret new inputs and create new outputs. This application of the model processing new data is commonly called inference. Inference is where an AI model is applied to real-world problems.

    Training is often performed with 32-bit or 16-bit floating point math, but inference models can often be scaled down to 8-bit or 4-bit integer precision to save memory space, reduce power and improve performance without significantly affecting the accuracy of the model. This scaling down is known as quantization, and going from 32-bit to 8-bit shrinks the model to one-fourth of its original size.
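A minimal sketch of the idea, using simple symmetric 8-bit quantization (a simplification of what real deployment toolchains do):

```python
import numpy as np

weights = np.random.randn(1000).astype(np.float32)   # float32 weights: 4 bytes each

scale = np.max(np.abs(weights)) / 127.0              # map the largest weight to +/-127
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)   # 1 byte each
dequant = q.astype(np.float32) * scale               # approximate reconstruction

print(weights.nbytes, "->", q.nbytes, "bytes")       # 4000 -> 1000: one-fourth the size
print("max error:", float(np.max(np.abs(weights - dequant))))
```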

    A variety of neural network architectures have been introduced over time, offering benefits in performance, efficiency and/or capabilities.

    Well-studied neural network architectures, like convolutional neural networks (CNNs), recurrent neural networks (RNNs) and long short-term memory (LSTM) networks, have been used to detect, classify, and predict, and have been widely deployed for voice recognition, image recognition, autonomous vehicles and many other applications.

    A recently popular class of latent variable models, called diffusion models, can be used for a number of tasks including image denoising, inpainting, super-resolution upscaling, and image generation. This technique helped start the popularization of generative AI. For example, an image generation model would start with a random noise image and then, after having been trained to reverse the diffusion process on numerous images, the model would be able to generate new images based on text input prompts. A good example is OpenAI’s text-to-image model DALL-E 2. Other popular examples of text-to-image generative AI models include Stable Diffusion and ControlNet. These models are known as language-vision models or LVMs.

    Many of the latest LLMs, such as Llama 2, GPT-4 and BERT, use the relatively new neural network architecture called the Transformer, which was introduced in 2017 by Google. These complex models are leading to the next wave of generative AI, where AI is used to create new content. The research into AI is ongoing and you should expect continual changes in architectures, algorithms and techniques.


    AI is being seamlessly integrated into our daily activities, like medical diagnostics, to enhance lives and improve outcomes.

    Real-time AI for everyone, everywhere

    Over the years there have been several leaps forward in the development of modern AI.

    It started with the idea that you could train neural networks with a process called deep learning that employed deeper layers of neurons to store more substantial information and represent more complex functions. Training these neural network models required a lot of computation, but advancements in parallel computing and more sophisticated algorithms have addressed this challenge.

    Running neural network training and inference through a DSP, FPGA, GPU, or NPU made the development and deployment of deep neural networks more practical. The other big breakthrough for large-scale AI was access to large amounts of data through all the cloud services and public data sets.

    A complex and nuanced AI model requires lots of generalized data, which can be in the form of text, speech, images and videos. All these data types are fodder for neural network training. Using these vast troves of rich content to train neural networks has made the models smarter and more capable.

    Compressing that knowledge into more compact models is allowing them to be shared beyond the cloud and placed into edge devices. The democratization of AI is happening now.

    Pat Lawlor
    Director, Technical Marketing, Qualcomm Technologies, Inc.

    Jerry Chang
    Senior Manager, Marketing, Qualcomm Technologies

    “Reinventing Smart Cities with Computer Vision,” a Presentation from Hayden AI
    https://www.edge-ai-vision.com/2023/09/reinventing-smart-cities-with-computer-vision-a-presentation-from-hayden-ai/
    Fri, 29 Sep 2023

    Vaibhav Ghadiok, Co-founder and CTO of Hayden AI, presents the “Reinventing Smart Cities with Computer Vision” tutorial at the May 2023 Embedded Vision Summit. Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic enforcement, parking and asset management. In this talk,…

    SiP Market Soars on the Wings of Chiplets and Heterogeneous Integration
    https://www.edge-ai-vision.com/2023/09/sip-market-soars-on-the-wings-of-chiplets-and-heterogeneous-integration/
    Thu, 28 Sep 2023

    This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

    The growth of the SiP market is propelled by the trends in 5G, AI, HPC, autonomous driving, and IoT.

    OUTLINE

    • The SiP market is forecast to reach US$33.8 billion by 2028, showcasing a robust 8.1% CAGR.

    • The growth of the SiP market is fueled by the increasing adoption of various technology trends, including heterogeneous integration, chiplet technology, package footprint reduction, and cost optimization, particularly within market segments such as 5G, AI, HPC, autonomous driving, and IoT.

    • The SiP supply chain is becoming increasingly competitive and emphasizes collaboration for optimal results: Asia dominates the market. OSATs are leading the competitive landscape.

    The SiP market was worth US$21.2 billion in 2022 and is projected to reach US$33.8 billion by 2028, growing at an 8.1% CAGR. This growth is fueled by trends like heterogeneous integration, chiplet technology, package optimization, and cost-efficiency, particularly in 5G, AI, HPC, autonomous driving, and IoT sectors. Yole Group’s analysts forecast that the mobile and consumer segment, which accounted for 89% of the 2022 revenues, will maintain a 6.5% CAGR, driven by 2.5D/3D technologies, HD FO, and FC/WB SiPs.
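A quick arithmetic check of the quoted figures (an illustrative calculation, not from the report): compounding US$21.2 billion at 8.1% per year over the six years from 2022 to 2028 does land at roughly US$33.8 billion.

```python
base_2022, cagr, years = 21.2, 0.081, 6      # US$ billions, growth rate, 2022 -> 2028
projection_2028 = base_2022 * (1 + cagr) ** years
print(f"US${projection_2028:.1f}B")          # prints US$33.8B, matching the forecast
```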

    “While mobile and consumer remain static in the overall semiconductor market, they are thriving in SiP due to 5G and computing trends. Telecom & infrastructure, automotive, and industrial sectors are the fastest-growing SiP markets, with telecom & infrastructure expecting 20.2% growth and automotive a 15.3% CAGR.”
    Yik Yee Tan, Ph.D.
    Senior Technology and Market Analyst, Packaging, Semiconductor, Memory, and Computing Division, Yole Intelligence (part of the Yole Group)

    In its new System-in-Package 2023 report, Yole Intelligence explores the SiP industry with market forecasts and trends.

    With this new product, the company analyses the related technologies and the trends toward 2.5/3D solutions. Indeed, SiP technology trends remain aggressive as the industry continues to demand more integration to allow reduced form factors and higher-performance products. In the mobile and consumer market, for example, footprint optimization is paramount because space is limited. This is particularly valid for smartphones, wearables, and other devices. For instance, the penetration of 5G in high-end smartphones has driven the adoption of SiP for RF and connectivity modules, with the need to integrate more components and shorten interconnections to achieve the required performance.

    Furthermore, this new report includes specific sections focused on the adoption of these technologies as well as details of the ecosystem, including the supply chain, competitive landscape, and market shares.

    “The SiP market is intensifying competition as SiP technologies gain prominence due to chiplets, heterogeneous integration, cost optimization, and footprint reduction trends, attracting more entrants.”
    Gabriela Pereira
    Technology and Market Analyst, Packaging, Semiconductor, Memory, and Computing Division, Yole Intelligence (part of the Yole Group)

    Indeed, the SiP supply chain is becoming increasingly competitive, and the focus is on collaboration for optimal results. More partnerships between chip and memory players, foundries, and others are forming, aiming to introduce cutting-edge technologies.

    So, what is the status of each strategic region? Asia dominates the SiP market with a 77% share, with Japan leading at 41%, primarily driven by Sony’s 3D CIS market. North America holds 23%, thanks to contributions from Amkor and Intel, while Europe accounts for 2%.

    From a business model point of view, FC/WB SiP is chiefly driven by OSATs like ASE, Amkor, JCET, TFME, PTI, Huatian, ShunSin, and Inari. TSMC dominates FO SiP with its InFO line, and Sony’s CIS market leads in 2.5D/3D SiP, followed by TSMC with Si interposer, Si bridge, and 3D SoC stacking.

    To maintain competitiveness, companies explore M&As and capacity expansion, offering comprehensive solutions to reduce time-to-market. These trends span various SiP market segments, including IDMs, OSATs, foundries, IC substrate suppliers, and EMS. OSATs, comprising 32% of the SiP market in 2022, focus on full-turnkey solutions and plan to invest in advanced SiP offerings. IDMs, accounting for 48%, develop proprietary packaging technologies, while foundries, mainly TSMC, hold 17% with advanced assembly capabilities. IC substrate suppliers are entering the market, and EMS models are expected to grow, especially in wearables. China’s SiP market presence is expanding, with compatibility and interest in packaging technologies for chiplets and hybrid bonding to enhance competitiveness.

    Acronyms

    • SiP: System-in-Package
    • CAGR: Compound Annual Growth Rate
    • AI: Artificial Intelligence
    • HPC: High Performance Computing
    • IoT: Internet of Things
    • HD FO: High-Density Fan-Out
    • FC/WB: Flip-Chip Wire-Bond
    • CIS: CMOS Image Sensor
    • SoC: System-on-Chip
    • IDM: Integrated Device Manufacturer
    • IC: Integrated Circuit

    Yole Intelligence’s semiconductor packaging team invites you to follow the technologies, related devices, applications, and markets on www.yolegroup.com.

    In this regard, do not miss Yik Yee Tan’s presentation “Global Vehicle Electrification Trends and BEV Opportunities in the Developing World” on November 8 during ISES South East Asia in Penang, Malaysia.

    Yik Yee Tan is a Senior Technology and Market Analyst and part of the packaging team at Yole Intelligence.

    Ask for a meeting with our experts at Yole Group’s booth: events@yolegroup.com.

    Stay tuned!
