MediaTek is at the top! Dimensity occupies top position in Zurich AI performance ranking

MediaTek wins the race. Driven by Moore's Law, mobile phone performance has improved rapidly, and benchmark scores are one of the most important reference indicators for measuring it. In recent years, with the advent of AI, the number of AI applications on mobile phones has grown; especially with the trends in photography, video, and gaming, the AI performance a chip delivers to a phone is becoming increasingly important.

As a result, a chip's AI efficiency and a phone's AI benchmark score have become important indicators for both mobile phone manufacturers and consumers.

AI Benchmark v4 Mobile Phone Rankings

The authoritative AI Benchmark from ETH Zurich has published the latest version of its AI performance rankings. In the mobile phone ranking, the iQOO Z1, a 5G phone powered by the MediaTek Dimensity 1000+, took first place with 133,000 points. In the mobile chip ranking, two 5G SoCs, the MediaTek Dimensity 1000+ and the Dimensity 820, took first and third place respectively.

The reason MediaTek's Dimensity 5G chips finished more than 13% ahead of second place and took both first and third place is directly related to MediaTek's proprietary standalone AI processor, the APU. As early as 2018, MediaTek launched its standalone APU and built the NeuroPilot artificial intelligence platform, which integrates heterogeneous compute units such as the CPU, GPU, and APU in the SoC, giving the SoC powerful AI computational capabilities.

MediaTek has also worked with AI partners to build an ecosystem that provides end manufacturers with complete solutions spanning computing power, algorithms, and applications. The first-generation APU was integrated into MediaTek's earlier 4G mobile SoCs to provide full support for AI camera functionality, which is still a mainstream solution for mobile photography today.

By the 5G chip era, MediaTek's standalone AI processor had been upgraded to APU 3.0, with a new six-core architecture and up to 4.5 TOPS of AI processing power, more than twice the performance of the previous-generation APU 2.0.

Compared to competing products, the APU 3.0 architecture is more efficient and maximizes "effective processing power" in all AI scenarios. APU 3.0 is like a sports car: although it cannot pull as much load as a truck, it is faster and more agile, making the most of its horsepower to reach the finish line sooner.

This is the main reason the Dimensity 1000+ was able to beat the competition and win the Zurich AI performance championship. MediaTek's AI technology has always been well suited to practical user scenarios such as portrait segmentation, video stabilization, face detection, and video optimization, which is also the direction in which the industry is developing.

The Dimensity 1000+ is equipped with the latest APU 3.0. With its support, the Dimensity 1000+ fully meets the performance requirements of 5G flagship phones across a wide range of AI applications, such as AI cameras, smart albums, AI video, and games.

Currently, MediaTek has launched a range of Dimensity 5G chips across the Dimensity 1000, Dimensity 800, and Dimensity 700 series, all equipped with a standalone APU, bringing excellent AI performance and applications to the entire Dimensity 5G line. As more Dimensity 5G devices become widely available, 5G phone users will have the opportunity to experience even richer AI capabilities.

Best artificial intelligence solutions that are revolutionizing the future of the IT industry

Unlocking the enormous potential that artificial intelligence promises will shape the future of software development. Strategic business interest in this breakthrough technology grows by the day, and companies around the world are investing heavily in AI. As more mature companies define their AI strategies, it is foreseeable that in the coming years AI tools will be a key to success, capable of generating trillions of dollars in business value.

Artificial intelligence algorithms and advanced analytics have enormous potential in software development, enabling seamless, large-scale decision-making in real time. AI applications can perform complex, intelligent functions associated with the human mind by collecting and analyzing data from a variety of sources, including sensors, remote inputs, and microchips. Here are five artificial intelligence platforms that can transform and automate modern software development.

Google Cloud AI Platform

The Google Cloud AI Platform provides machine learning, deep learning, natural language processing, speech, and vision capabilities for software development in the cloud. These include:

– Machine learning

With the Google Cloud AI Platform, which provides an integrated toolchain, software developers can easily build and train machine learning models and accelerate the development and deployment process.
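Such a managed toolchain automates the train-evaluate-deploy loop; at its core sits the kind of fitting loop sketched below (a self-contained illustrative toy in plain Python, not Google Cloud code):

```python
# Minimal illustration of model training: fit y = w*x + b to data
# by stochastic gradient descent.
def train(data, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y      # prediction error on one sample
            w -= lr * err * x          # gradient step for the weight
            b -= lr * err              # gradient step for the bias
    return w, b

data = [(x, 2.0 * x + 1.0) for x in range(5)]  # points on y = 2x + 1
w, b = train(data)
```

A managed platform wraps this loop with distributed training, hyperparameter tuning, and one-click deployment.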

– Deep learning

The Google Cloud AI Platform provides pre-configured virtual machines (VMs) to help software developers create deep learning applications. You can configure VMs in Google Cloud that come with popular AI frameworks such as scikit-learn, TensorFlow, and PyTorch pre-installed, and launch them as Google Compute Engine instances.

– Natural Language Processing (NLP)

The Google Cloud AI Platform provides NLP features that help software developers understand both the structure of a text and what it is saying. These features are available through Google's RESTful Natural Language API for text analysis.
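Calling the API amounts to POSTing a JSON document to an endpoint; the sketch below builds such a request body (the helper name is my own, and the structure follows the public `documents:analyzeSentiment` REST shape):

```python
import json

# Hypothetical helper that builds the JSON body for the Natural
# Language REST endpoint `v1/documents:analyzeSentiment`.
def build_sentiment_request(text):
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }

body = json.dumps(build_sentiment_request("The new release is great."))
```

The serialized `body` would then be POSTed with an API key or OAuth token; the response contains sentiment scores per sentence and for the document as a whole.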

– Speech

The Google Cloud AI Platform offers speech-to-text and text-to-speech APIs based on neural network models. The Speech-to-Text API, which converts audio to text, supports more than 120 languages and their variants.

With its speech recognition capabilities, software developers can enable voice commands and controls in their applications. The Text-to-Speech API generates natural-sounding speech from text, a feature that is ideal for converting text to MP3 files or audio in LINEAR16 format.
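A text-to-speech request follows the same REST pattern; below is a sketch of the JSON body for the `text:synthesize` endpoint (the helper name and default values are my own assumptions):

```python
# Hypothetical helper building the request body for the
# Text-to-Speech REST endpoint `v1/text:synthesize`.
def build_tts_request(text, encoding="LINEAR16"):
    return {
        "input": {"text": text},
        "voice": {"languageCode": "en-US", "ssmlGender": "NEUTRAL"},
        "audioConfig": {"audioEncoding": encoding},
    }

# Request MP3 output instead of the LINEAR16 default.
req = build_tts_request("Hello, world", encoding="MP3")
```

The API returns the synthesized audio as base64-encoded content, which can be written straight to an .mp3 or .wav file.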

– Vision

Vision is another key feature of the Google Cloud AI Platform that allows software developers to derive smart insights from images. Google's Vision AI functions are exposed through REST and RPC APIs; backed by pre-trained ML models, these APIs can recognize objects and scenes and read handwritten and printed text.

Why do people focus on reinforcement learning when it comes to artificial intelligence?

There are other areas of AI that look promising, such as reinforcement learning. Remember that top companies like Google's DeepMind and OpenAI are already working on this approach and making breakthroughs.

So what is reinforcement learning? Well, interestingly, it's not new. "Reinforcement is a classic behavioral phenomenon, widely known in the psychological literature since the early 1950s," says Matt Johnson, Ph.D., a professor of psychology at Hult International Business School and author of Blindsight: The (Mostly) Hidden Ways Marketing Reshapes Our Brains. "In its simplest form, the frequency of a behavior will rise or fall depending on the immediate consequences of that behavior. This applies to both animal and human behavior."

The future is AI and reinforcement learning

However, some of the key principles of reinforcement have been applied to AI models. "Reinforcement learning requires an action, a stimulus, and a payoff," says Ankur Taly, head of data science at Fiddler. "An agent, such as a robot or a game character, interacts with its environment, observes the outcomes of certain actions, and reacts accordingly in order to achieve useful or desirable results. Reinforcement learning follows a trial-and-error approach to find the best way to achieve the best results. This is very similar to how we play video games: the agent makes a series of attempts to obtain the highest score or maximum reward, and after many iterations, it learns to maximize its cumulative reward."
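The agent-environment-reward loop Taly describes can be sketched as a tiny Q-learning toy (an illustrative example of the technique, not code from the article): an agent in a five-state corridor learns, over many iterations, that moving right maximizes its cumulative reward.

```python
import random

# Toy Q-learning on a 5-state corridor: reaching the rightmost
# state yields a reward of 1; all other transitions pay nothing.
random.seed(0)
N, ACTIONS = 5, (-1, 1)                      # states 0..4; move left / right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3            # learning rate, discount, exploration

for _ in range(1000):                        # many iterations, as Taly notes
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)       # environment transition (walls clip)
        r = 1.0 if s2 == N - 1 else 0.0      # the payoff
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: the preferred action in each non-terminal state.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N - 1)]
```

After training, the policy is "move right" (+1) everywhere: the agent has propagated the end-of-corridor reward backwards through the discounted Q-values.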

Machine Learning is changing the world

In fact, some of the most interesting applications of reinforcement learning involve complex games. Consider the case of DeepMind's AlphaGo: the system learned to play Go through reinforcement learning and beat world champion Lee Sedol in 2016 (the game has more possible positions than there are atoms in the universe!).

But of course, the technology has applications beyond games. Reinforcement learning is particularly useful for robotics: OpenAI, for example, has used it to train a robotic hand to solve a Rubik's Cube.

Here are some other areas where reinforcement learning can have an impact:

Entertainment: the future could bring free-form environments for the next generation of "movie lovers," in which AI-driven characters cooperate and adapt to generate detailed storylines, so that consumers no longer rely on fixed dialogue and rigid interactions between player characters.

Healthcare: imagine trying to teach AI doctors how to treat patients through reinforcement learning. An AI doctor could try drugs almost at random to see how they work and, over time, develop patterns and understand which drugs work best in which situations. But we obviously cannot let AI doctors run such experiments on real patients, and human physiology is too complex to build simulations suitable for virtual experiments.

Scientists have developed an artificial eye to act as bionic eye for visually impaired people

Scientists have developed an artificial eye that could enable humanoid robots to see or even act as a bionic eye for visually impaired people in the future.

Researchers at Hong Kong University of Science and Technology have designed the electrochemical eye, nicknamed the EC-Eye, to match the size and shape of a biological eye, but with much greater potential.

The eye mimics the human iris and retina by using a lens to focus light onto a dense matrix of light-sensitive nanowires. The information is then transmitted to a computer for processing via the wires, which act as the visual cortex of the brain. During the tests, the computer was able to recognize the letters “E”, “I” and “Y” when they were projected onto the lens.
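Conceptually, the recognition step reduces to matching the sensor array's pixel pattern against known letter shapes. The sketch below is a deliberately simplified stand-in (template matching on a 3x5 grid, not the researchers' actual pipeline):

```python
# Toy letter recogniser: compare a binary sensor pattern against
# stored templates and pick the closest match.
TEMPLATES = {
    "E": ["###", "#..", "##.", "#..", "###"],
    "I": ["###", ".#.", ".#.", ".#.", "###"],
    "Y": ["#.#", "#.#", ".#.", ".#.", ".#."],
}

def classify(pattern):
    # Score a template by counting matching pixels, row by row.
    def score(template):
        return sum(p == q
                   for prow, trow in zip(pattern, template)
                   for p, q in zip(prow, trow))
    return max(TEMPLATES, key=lambda k: score(TEMPLATES[k]))

reading = ["###", "#..", "##.", "#..", "###"]  # simulated sensor output
```

A real readout would of course be an analog photocurrent per nanowire rather than a clean binary grid, but the classification idea is the same.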

The artificial eye could theoretically be connected to an optic nerve to transmit information to the human brain, the researchers said, and could also improve on the camera-based eyes currently used in robots.

“The biological eye is probably the most important sensory organ for most animals on this planet,” the researchers wrote in an article describing the discovery.

“Machine vision systems that mimic the human eye are also indispensable in autonomous technologies like robotics. In humanoid robots, in particular, the vision system must resemble that of a human being in order to enable friendly interaction between humans and robots and must have superior device characteristics.”

At present, the proof-of-concept device has a low resolution because each of the 100 nanowires used in its construction represents only one pixel. However, researchers have said that through further development, the artificial eye could have even better resolution than the human eye.

Up to 10 times more nanowires than the biological eye has photoreceptors could potentially be used, allowing the artificial eye to distinguish between visible light and infrared radiation.

This would enable a user of the human bionic eye to see smaller objects and greater distances and to acquire night vision capabilities.


Microsoft creates supercomputer for smarter Artificial Intelligence

Anyone who remembers the scenes from the Terminator movies, in which the machines turn against us at the command of a supercomputer, may find them closer to becoming reality now than ever before.

Microsoft has built a huge supercomputer for its artificial intelligence work, a turning point for its Azure cloud computing service.

This beautiful machine has no less than 285,000 processor cores and 10,000 graphics chips, and was built for OpenAI, a company that wants to ensure that AI technology helps people.

The news was announced by Microsoft at its Build conference for programmers. For context: supercomputers, the most powerful computing machines in the world, tend to be reserved for the most difficult problems.

In other words, they are used for work such as simulating nuclear explosions, predicting the Earth's future climate, and even finding drugs to combat the coronavirus.

What makes supercomputers stand out from other machines is their gigantic amount of memory and the fast connections between their processors. This means a supercomputer can take on a more complicated problem than a larger group of smaller, cheaper machines could.

Both Microsoft and OpenAI believe that their massive supercomputer will give AI a new sophistication. This partnership, which began last year, involves a billion-dollar investment by Microsoft.

It is a step on the road from supercomputers to intelligent machines, so perhaps we had better treat the machines well; otherwise we may really see them turn against us, and Terminator and other movies of the genre will become real. AI is becoming more real.

Source: CNet

Nvidia RTX Voice, to Reduce Background Noise Using Artificial Intelligence

One of the worst things about answering phone calls and attending work meetings from home these days is the background noise. To prevent this problem, NVIDIA has developed a new free software called RTX Voice that eliminates background noise when you talk, and a beta version is now available online.

Apple acquires AI startup Voysis to make Siri more coherent

Apple is dedicated to making Siri even better. The AI startup Voysis is now Apple's property: the company recently acquired the Dublin-based Irish startup, which focuses on improving natural language processing in virtual assistants, AppleInsider reports.

According to Bloomberg, Voysis has developed a natural language processing platform specializing in speech assistants for online shopping applications. By better understanding the customer's natural speech patterns, the embedded technology can deliver more accurate search results.

Apple confirmed the acquisition in a statement, saying that it "acquires smaller technology companies from time to time and we usually do not discuss our intentions or plans."

A now-deleted page reported that Voysis was able to narrow down product search results by parsing retail-related phrases such as "I need a new LED TV" and "My budget is $1000." Effective speech processing allows users to interact more naturally with AI speech assistants, removing barriers such as having to memorize key command phrases.

The company's solution is based on WaveNet, a technology introduced in 2016 as part of Google's DeepMind project. WaveNet is described as a "deep generative model of raw audio waveforms" that can generate speech mimicking any human voice, providing a more natural virtual assistant experience. Voysis appears to use this method to let AI systems sample and interpret human voice commands more accurately.

Prior to the acquisition, Voysis marketed its natural language solutions to retail companies that wanted to improve voice assistant integration in their apps or web pages. The company reportedly reduced its solution's memory footprint to 25 MB, allowing easy integration with existing AI systems.

In 2018, Voysis raised $8 million in venture capital to further develop and launch a refined product, TechCrunch reported at the time.

"Voysis is a complete voice AI platform," said Peter Cahill, founder and CEO of Voysis, in a 2018 interview. "What I mean is that this platform allows companies and businesses to quickly build their own AI that can be accessed by voice or text."

Pohoiki Springs: Intel neuromorphic research system unleashes the power of 100 million neurons

Intel announced that it has readied Pohoiki Springs, its latest and most powerful neuromorphic research system, with the processing power of a hundred million neurons.

The cloud-based system will be available to members of the Intel Neuromorphic Research Community (INRC) and will enhance their neuromorphic work to solve larger and more complex problems.

"Pohoiki Springs scales up our Loihi neuromorphic research capabilities by more than 750 times, while operating at less than 500 watts. The system allows our researchers to explore ways to accelerate workloads that run slowly on conventional architectures, including high-performance computing systems," said Mike Davies, director of Intel's Neuromorphic Computing Lab.

Pohoiki Springs is the largest neuromorphic computing system developed by Intel to date. It integrates 768 Loihi neuromorphic research chips into a chassis the size of five standard servers.

The Loihi processors are inspired by the human brain. Like the brain, they can handle certain demanding workloads up to a thousand times faster and ten thousand times more efficiently than conventional processors.
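The basic computational unit behind such chips is the spiking neuron. A minimal leaky integrate-and-fire model (a textbook simplification, not Intel's actual neuron circuit) shows the idea: charge accumulates, leaks over time, and a spike fires when a threshold is crossed.

```python
# Leaky integrate-and-fire neuron: integrate input current each
# timestep, leak stored charge, and emit a spike at threshold.
def simulate(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current       # integrate input, leak old charge
        if v >= threshold:           # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# A steady sub-threshold input produces a regular spike train.
spike_times = simulate([0.4] * 10)
```

Because information is carried by sparse spike events rather than dense numeric activations, hardware built around this model can stay idle (and draw little power) whenever nothing is firing.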

With 100 million neurons, Pohoiki Springs raises the neural capacity of the Loihi processors to that of a small mammal's brain, an important step towards supporting much larger and more demanding neuromorphic workloads. The system lays the foundation for a networked, autonomous future that requires new ways of processing data dynamically in real time.

Meena, the Google chatbot surpasses Siri and Alexa


Google is not far behind in a world where virtual assistants are gaining ground: it has launched Meena, its new chatbot, which "can talk about anything" and arrives to compete with Alexa and Siri.

At present, Amazon's and Apple's virtual assistants lead the market, while Google has remained a marginal player in this field despite its attempts to enter it with Google Assistant. With Meena, however, it could gain a big advantage over the rest.

Meena, the Google chatbot

According to the source, Meena is an end-to-end trained neural network model that Google trained for 30 days on 2,048 tensor processing units (Google's special AI-specific chips), using over 340 GB of text, or nearly 40 billion words.

In addition, the source mentions that this processing power has given Meena the ability to understand the context of a conversation well enough to provide a coherent answer.

How coherent is Meena?

In order to evaluate Meena's behavior, Google has developed a new metric called the "Sensibleness and Specificity Average" (SSA), which allows the Mountain View company to measure whether a bot's response is sensible, i.e., appropriate and human-like, and whether that response is specific.

For example, in a conversation where the person says, "I love spy movies," if the bot were to respond, "Great, tell me more," that would be a sensible response, but it is not specific.

Meena, however, could reply: "Amazing. I like all the Mission Impossible spy movies. Which one is your favorite?", a more specific and appropriate answer.
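The scoring idea behind SSA can be illustrated with a toy calculation (hypothetical labels and helper, not Google's implementation): human raters label each response as sensible and/or specific, and SSA averages the two rates.

```python
# Toy SSA: average the fraction of sensible responses and the
# fraction of specific responses over a set of rated replies.
def ssa(labels):
    # labels: list of (sensible, specific) booleans, one per response
    sensibleness = sum(s for s, _ in labels) / len(labels)
    specificity = sum(p for _, p in labels) / len(labels)
    return (sensibleness + specificity) / 2

# Four rated responses: three sensible, two of them also specific.
score = ssa([(True, True), (True, False), (False, False), (True, True)])
```

The metric deliberately rewards specificity so that a bot cannot score well with safe but empty replies like "Great, tell me more."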

Meena beats the rest

According to Google's tests, Meena's results come close to human level: humans score about 86% on SSA, while Meena, an end-to-end neural network, reached 79%. By comparison, Mitsuku and Cleverbot reached 56%, DialoGPT 48%, and XiaoIce 31%.

Although Meena is already running, Google is keeping this chatbot as a test version, so it will be a long time before Meena is publicly available.

Apple acquires AI-based human detection startup Xnor.ai

The company provides the artificial intelligence technology behind person detection in Wyze's cameras. According to LinkedIn, Xnor.ai was founded in 2017 and is headquartered in Seattle with 54 permanent employees. Prior to the acquisition, Xnor.ai raised approximately $14.6 million from investors such as Seattle's Madrona Venture Group.

Apple acquires artificial intelligence startup

Apple has been building out its HomeKit ecosystem for some time now, so far from a platform and software point of view, without fully committing to creating its own smart home devices. And while there are no rumors of Apple entering this IoT hardware market, the company is part of an alliance with Amazon and Google to boost the sector.

The company has now confirmed the purchase of the Seattle-based artificial intelligence company Xnor.ai, known as one of the providers of the people-recognition technology used in Wyze's smart home cameras.

The specialty of this start-up company is to run its software on devices with fewer computing resources, rather than in data centers with powerful computers. The products they develop can run on devices such as smartphones, cameras, drones, and even embedded, low-power mobile CPUs.

The purchase is a significant acquisition, since Wyze's advanced camera models, such as the Wyze Cam V2 and Wyze Cam Pan, have been using Xnor.ai's people-recognition technology since last summer. However, this feature is no longer available in the beta version of the camera software now that Apple owns the company, so Wyze will lose one of its star features.

It is not clear whether Apple took this step to develop advanced features for the smart home, or whether Xnor.ai's technology will be used for other projects, such as improving the recognition of faces and people in the camera, or the rumored smart car technology.