The company seems to take a liberal view of what is and is not price-sensitive information. If price sensitivity correlates with the excitement generated on social media and other stock chat platforms, BrainChip has work to do.
Shares began to climb late last year after BrainChip announced in November that it had signed a licensing agreement with Japanese semiconductor maker MegaChips.
The four-year agreement grants MegaChips a worldwide non-exclusive intellectual property license to use BrainChip's Akida technology in the design and manufacture of external customer systems.
Mercedes' decision to use BrainChip's Akida processor in the EQXX became public knowledge a week ago. The stock has risen 42 percent since then.
On Monday, BrainChip said US customer Information Systems Laboratories was developing an AI-based radar research solution for the Air Force Research Laboratory based on its Akida™ neural network processor.
Notwithstanding the company's seemingly loose interpretation of continuous disclosure obligations, it is clearly a tech stock to watch in 2022, provided it gains commercial traction, as it operates in one of the most promising areas of artificial intelligence.
In AI, there are three classes of machine learning: supervised learning, unsupervised learning, and reinforcement learning.
When experts talk about machine learning, they usually mean supervised learning.
If you want to predict someone's score on an exam, you can ask things like how many hours they studied or how many hours they slept, and then analyze that data to estimate what the score might be.
To represent this in machine learning, data is arranged in a table, with each column representing a different characteristic or attribute. The mathematical function that turns this into a probable test score is matrix multiplication, with a certain weight assigned to each characteristic in the table.
More weight would be placed on time spent studying and less weight on student sleep time.
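The weighted-sum idea can be sketched as a matrix multiplication; the feature values and weights below are made up purely for illustration:

```python
import numpy as np

# Each row is a student; columns are features: [hours studied, hours slept]
features = np.array([
    [8.0, 7.0],
    [2.0, 9.0],
])

# Hypothetical weights: study time counts for more than sleep time
weights = np.array([9.0, 2.0])

# One matrix multiplication turns the feature table into predicted scores
predicted_scores = features @ weights
print(predicted_scores)  # [86. 36.]
```

Each predicted score is just the sum of every feature multiplied by its weight, which is exactly what the matrix product computes for every row at once.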
A graphics processing unit (GPU) performs matrix multiplication very well. GPUs have slower clock speeds than CPUs, but they can carry out a huge number of operations in parallel.
BrainChip, Intel, and IBM have found more efficient ways to design machine learning models using event-based sensors, which will become ubiquitous as the global economy moves to the Internet of Things.
An event-based approach to processing
When applying machine learning to footage of someone playing soccer, classic machine learning would process all the information in the scene, such as the grass, the sky, and other background elements.
An event-based approach to processing saves energy because it focuses only on the moving parts, such as the ball.
Today, most machine learning relies on convolutional neural networks, which work like a moving window that slides across the data matrix. Essentially, they find patterns that are spatially correlated.
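The sliding-window behaviour of a convolution can be sketched in a few lines (a toy example, not BrainChip's or any vendor's implementation):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image, summing element-wise products."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = image[i:i + kh, j:j + kw]  # the "moving window"
            out[i, j] = np.sum(window * kernel)
    return out

# A tiny vertical-edge detector applied to a 3x4 image
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]])
kernel = np.array([[-1, 1],
                   [-1, 1]])
print(convolve2d(image, kernel))
# [[0. 2. 0.]
#  [0. 2. 0.]]
```

The output lights up only where the window covers a left-to-right change in pixel values, which is the spatially correlated pattern this particular kernel detects.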
The BrainChip processor operates on what is called a spiking neural network, which processes only the "events" or "spikes" that carry useful information. This approach, similar to how the human brain works, does not map efficiently onto GPUs.
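The event-driven idea of reacting only to changes, rather than reprocessing the whole scene, can be sketched like this (a deliberate simplification for illustration, not how Akida actually works):

```python
def extract_events(prev_frame, frame, threshold=1):
    """Emit (index, change) 'spikes' only where a pixel changed enough."""
    return [(i, b - a)
            for i, (a, b) in enumerate(zip(prev_frame, frame))
            if abs(b - a) >= threshold]

# Two frames of a 1-D "scene": only the moving ball changes position
prev_frame = [0, 0, 5, 0, 0]
frame      = [0, 0, 0, 5, 0]

events = extract_events(prev_frame, frame)
print(events)  # [(2, -5), (3, 5)] — only 2 of 5 pixels need processing
```

Downstream computation then runs over the short event list instead of every pixel of every frame, which is where the energy savings come from.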
According to van der Made, Intel and IBM test chips, including Loihi, Loihi 2, and TrueNorth, are not comparable to BrainChip's AKD1000 chip.
He says IBM's TrueNorth has no on-chip learning, is "very big", and is not commercially viable.
Intel's Loihi chip is comparable in size to the AKD1000, but is manufactured in an expensive 7nm process, while BrainChip's AKD1000 uses standard 28nm manufacturing technology, according to van der Made.
"AKD1000 has convolution and on-chip learning and can be simply configured using standard TensorFlow tools," he says.
"The AKD1000 is in production and has many sample applications for vision, speech recognition, keyword recognition, and odor and taste classification."