The processing power at the infrastructure edge provides the low latency and high performance that Artificial Intelligence (AI) and Machine Learning (ML) applications require. AI/ML applications are often located at the edge and send large amounts of processed data from devices to a data center, with high bandwidth and low latency as a common prerequisite.
High-frequency data collected by sensors is increasingly processed with AI/ML at extremely low latency, generating immediate insights that can trigger the required actions. The network load is substantially reduced because the raw data is not sent; only the needed, and often enriched, data is forwarded to the cloud for further analysis. Connected devices thereby become less dependent on high-quality network connections. However, even when vast amounts of raw data are transformed or condensed into insights, the total volume of data grows more demanding as the number of data collection points in an enterprise increases. This calls for smart technology such as the CloudBackend Edge dbPaaS, a data management platform designed to make scaling of AI/ML utilization effective and industrialized. Training AI and ML models requires ever more processing power. This trend is predicted to continue for years and to demand more localized processing power, data storage, and resources such as data management platforms.
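The reduction in network load described above comes from condensing raw samples into enriched records at the edge. As a minimal sketch of the pattern (the field names, window size, and threshold below are illustrative assumptions, not part of any CloudBackend API):

```python
from statistics import mean

def summarize_window(readings, threshold=75.0):
    """Condense a window of raw sensor readings into one enriched record.

    Only this summary is sent to the cloud; the raw samples stay at the edge.
    The field names and threshold are hypothetical, chosen for illustration.
    """
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alert": max(readings) > threshold,  # immediate local insight
    }

# One second of 1 kHz samples is reduced to a single small record,
# so only a few bytes instead of thousands of samples cross the network.
window = [70.0 + (i % 10) for i in range(1000)]
record = summarize_window(window)
```

In this sketch, a thousand raw samples become one record containing the aggregate values and a locally derived alert flag, which is the kind of enriched data the text describes forwarding to the cloud.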
With a micro cloud from CloudBackend integrated in the edge node, innovative functionality is enabled, such as a user experience that follows the user between different mobile devices. It also simplifies data synchronization to and from infrastructure, mobile devices, or other edge endpoints. On the infrastructure side, it simplifies the distribution of data (enabling or restricting it) between geographical areas or parties, and reduces the need for data transfers. The CloudBackend data management APIs are served locally by the micro cloud in the edge node. Functions are executed there, or in another cloud higher in the topology, acting as the data layer for the AI/ML applications that process the data.
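The locally served data layer with upstream synchronization can be sketched as follows. All names in this example are hypothetical assumptions for illustration; the actual CloudBackend APIs are not shown here and may differ:

```python
class EdgeStore:
    """Hypothetical sketch of a micro-cloud data layer at an edge node.

    Writes are served locally for low latency, and a change log lets the
    node forward only new or updated items to a cloud higher in the
    topology, instead of transferring the whole data set.
    """

    def __init__(self):
        self._data = {}      # locally stored key/value records
        self._pending = []   # keys changed since the last upstream sync

    def put(self, key, value):
        """Serve the write locally and mark it for replication."""
        self._data[key] = value
        self._pending.append(key)

    def sync_upstream(self):
        """Return only the changed items for upstream replication."""
        batch = {k: self._data[k] for k in self._pending}
        self._pending.clear()
        return batch

store = EdgeStore()
store.put("sensor/1", {"mean": 74.5})
store.put("sensor/2", {"mean": 80.1})
batch = store.sync_upstream()  # only the two changed records move upstream
```

The design choice worth noting is the change log: by tracking what has been modified since the last sync, the edge node transfers deltas rather than full state, which matches the reduced data-transfer need described above.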