Why A3Cube
 
From the very beginning, A3C has focused on providing the best possible solutions to store, manage, exchange and analyze huge amounts of data. For this reason, the growing number of critical data-driven applications, and the growing amount of data these applications have to manage, are strongly driving the Company's growth. Data mining, Machine Learning and Artificial Intelligence are the best-known examples of such applications and give a precise idea of where the market is going. The new imperative is to generate value from data and to be able to take smart actions based on it. To accomplish that, data must be analyzed with new tools and in new technological environments increasingly ruled by machine-to-machine communication, distributed storage, cloud computing and networks of data-gathering sensors.

Specifically, the Company decided to develop products for high-performance Data Mining, Machine Learning and Artificial Intelligence, driven by the conviction that high-performance data analysis and data-driven computation will be among the most important topics of the coming years.

  • Data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives, establishing relationships and summarizing them into useful information.
  • Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction.
  • Machine Learning (ML) is a type of Artificial Intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine Learning (ML) focuses on the development of computer programs that can change when exposed to new data.

All these applications are based on massive parallelism and require a supercomputer approach.
Today's supercomputers, however, are optimized for number crunching, not for data-oriented operations.

Datacenters, on the other hand, are far from able to address this kind of problem efficiently, simply because they are not designed for intensive, supercomputing-style workloads. The ubiquitous presence of software layers that virtualize the underlying hardware infrastructure makes things even worse. Just as an example, data analytics requires a low-latency parallel file system distributed among the computational units, so that each single node has the fastest possible access to the data. Today's enterprise storage systems are not designed to solve this problem efficiently, and datacenter distributed file systems, like the very well-known HDFS, are not designed for performance. The result is inefficient I/O that dramatically degrades application performance.
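
To make the I/O-latency point concrete, the back-of-the-envelope sketch below (an illustration, not A3Cube code; the record size, the per-access latencies and the 10 GB/s link bandwidth are assumed values chosen only to show the trend) estimates the effective throughput of small random reads when every access pays a fixed latency:

    # Effective throughput of small random reads: every read pays a fixed
    # access latency plus the transfer time of the record itself.
    def effective_throughput_mb_s(record_kb, latency_us, bandwidth_gb_s):
        record_bytes = record_kb * 1024
        transfer_s = record_bytes / (bandwidth_gb_s * 1e9)
        total_s = latency_us * 1e-6 + transfer_s
        return record_bytes / total_s / 1e6

    # 64 KB records over an assumed 10 GB/s link, with millisecond-class
    # access (typical of software-heavy distributed file systems) versus
    # microsecond-class access (a low-latency parallel file system).
    for label, latency_us in [("millisecond-class access", 5000.0),
                              ("microsecond-class access", 50.0)]:
        print(f"{label}: {effective_throughput_mb_s(64, latency_us, 10):.0f} MB/s")

With these assumed numbers, the microsecond-class case delivers roughly two orders of magnitude more throughput on the same link, which is why access latency, rather than raw bandwidth, dominates this kind of workload.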

Talking about AI and ML, they are the fastest-growing fields in today's market. Today's most advanced ML and AI solutions rely almost exclusively on GPU-accelerated computing to train and speed up challenging applications such as image, handwriting and voice recognition. However, standard GPU deployments have many limitations. For example, many applications use multiple GPUs (e.g. convolutional neural network frameworks), but with the limitation that a single server must host and manage all of them.
This is a very big constraint: even the most advanced (and expensive!) servers specialized for multiple GPUs cannot host more than 8-16 GPUs, and scaling across additional servers requires deep application modifications, with performance that will, in any case, be severely affected by many factors, starting with the overall clustering architecture.
Applications, on the contrary, are not limited to 8-16 GPUs; there are many Machine Learning problems that can take advantage of tens, hundreds, or even more than a thousand GPUs. Another big problem is that GPU utilization is limited by the impossibility of sharing an idle resource with other computational tasks that require GPU acceleration.

For example, if a computational task running on one GPU is using only 1/10 of that GPU's resources, and a second task needs GPU acceleration, the second task cannot access the remaining 90% of the GPU's resources, because the GPU is already locked by the first task and cannot be shared, as the sketch below illustrates.
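
A minimal sketch of that arithmetic (an illustration, not A3Cube code; the per-task GPU fractions are assumed values):

    # Exclusive GPU locking: each task occupies a whole GPU regardless of how
    # much of that GPU it actually uses.
    task_gpu_fractions = [0.10, 0.25, 0.15, 0.30]  # assumed per-task GPU usage

    gpus_locked = len(task_gpu_fractions)           # one whole GPU per task
    capacity_used = sum(task_gpu_fractions)         # 0.80 GPU-equivalents in total

    print(f"GPUs locked: {gpus_locked}")
    print(f"Capacity actually used: {capacity_used:.2f} GPU-equivalents")
    print(f"Utilization: {capacity_used / gpus_locked:.0%}")  # 20% in this example

Under these assumptions, four GPUs are locked to perform work that would fit on a single shared device, leaving 80% of the installed GPU capacity idle.
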
Last but not least, when dealing with distributed systems, the latency of server-to-server communication must be taken into account in order to make local performance (inside one server) identical to remote performance (server to server). This latency problem is the most limiting factor in the scalability and efficiency of applications that run on distributed systems, and it limits the possibility of sharing resources efficiently across different servers.
A3C Technology is the solution that overcomes all of these limits.

The Company pioneers the transformation from high-performance computing systems to high-performance data systems and strongly believes in the importance of providing better equipment to the community to extract the maximum value from all the data collected.
A3Cube's proposition differs markedly from what is currently available on the market worldwide and addresses the most challenging problems with uniquely optimized systems that have no equal among today's market offerings.

