Apache SINGA, an open-source library for distributed and scalable machine learning applications, has left the incubator of the Apache Software Foundation (ASF). The new top-level project already has a few years behind it, but thematically it is more relevant than ever…
The SINGA project was launched in 2014 at the National University of Singapore. In March 2015, Apache SINGA entered the ASF incubator, where it was put through its paces and further developed; that phase ended in October of this year. Yesterday, the project left the Apache Incubator and has since been a top-level project of the Apache Software Foundation. An occasion that also greatly pleased Wei Wang, vice president of the project and assistant professor at the National University of Singapore:
We are excited that SINGA has graduated from the Apache Incubator. The SINGA project started at the National University of Singapore, in collaboration with Zhejiang University, focusing on scalable distributed deep learning. In addition to scalability, during the incubation process we built multiple versions to improve the project's usability and efficiency. Incubating SINGA at the ASF brought opportunities to collaborate, grow our community, standardize the development process, and more.
Apache SINGA – what's in the box
As already mentioned, Apache SINGA is about distributed and scalable machine learning applications and models. The software stack consists of three components: Core, IO, and Model. Core provides memory management and tensor operations. IO provides classes for reading and writing data from and to disks and networks. The Model component contains the data structures and algorithms for machine learning models: layers for models based on neural networks, as well as optimizers, initializers, and metrics for conventional machine learning models.
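To make the three-component split more concrete, here is a minimal, purely illustrative Python sketch of how such a stack could be layered. The class names and signatures are hypothetical and are not Apache SINGA's actual API; they only mirror the roles of Core, IO, and Model described above.

```python
import numpy as np

# Hypothetical sketch of the Core / IO / Model split -- NOT SINGA's real API.

class CoreTensor:
    """Core: memory management and tensor operations."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)

    def matmul(self, other):
        return CoreTensor(self.data @ other.data)

    def relu(self):
        return CoreTensor(np.maximum(self.data, 0.0))

class CsvReader:
    """IO: reading data from a source (here, an in-memory CSV string)."""
    def read(self, text):
        rows = [list(map(float, line.split(",")))
                for line in text.strip().splitlines()]
        return CoreTensor(rows)

class DenseLayer:
    """Model: a layer built on top of Core tensor operations."""
    def __init__(self, weights):
        self.w = CoreTensor(weights)

    def forward(self, x):
        return x.matmul(self.w).relu()

reader = CsvReader()
x = reader.read("1.0,2.0\n3.0,4.0")            # IO component loads data
layer = DenseLayer([[1.0, -1.0], [0.5, 0.5]])  # Model component defines the layer
out = layer.forward(x)                          # computation runs on Core tensors
print(out.data.tolist())
```

The point of the separation is that the Model layer never touches raw memory or file handles directly; it composes Core operations on data delivered by IO.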
The architecture of Apache SINGA / © Copyright 2019 The Apache Software Foundation
The library aims to give developers the ability to train large machine learning models distributed across an entire cluster of machines. The focus is on particularly compute-intensive deep learning models. For the scalability and performance of the whole, the library contains a variety of optimization options concerning communication, memory usage, and synchronization.
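The core idea behind this kind of distributed training can be sketched with synchronous data-parallel SGD: each worker computes a gradient on its own data shard, the gradients are averaged (an all-reduce step), and the shared model is updated once per round. The sketch below simulates the workers in a single process with NumPy; real cluster communication, which SINGA optimizes, is out of scope here.

```python
import numpy as np

# Minimal sketch of synchronous data-parallel training with gradient
# averaging. Workers are simulated in one process; this is a conceptual
# illustration, not SINGA code.

rng = np.random.default_rng(0)

# Ground-truth linear model y = X @ w_true, data split across 4 "workers".
w_true = np.array([2.0, -3.0])
shards = []
for _ in range(4):
    X = rng.normal(size=(64, 2))
    shards.append((X, X @ w_true))

w = np.zeros(2)   # shared model parameters
lr = 0.1
for step in range(200):
    # Each worker computes a gradient on its own shard...
    grads = []
    for X, y in shards:
        err = X @ w - y
        grads.append(X.T @ err / len(y))
    # ...then the gradients are averaged (all-reduce) and applied once.
    w -= lr * np.mean(grads, axis=0)

print(np.round(w, 3))  # converges toward w_true = [2, -3]
```

In a real cluster, the averaging step is where communication, memory usage, and synchronization costs arise, which is exactly where the optimization options mentioned above come into play.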
Beng Chin Ooi, founder and initiator of the project, emphasized in the press release how important it is to scale deep learning in the sense of distributed computing. On a single GPU, training deep learning models can take hundreds of days, since these models are particularly large and are trained on huge amounts of data. Apache SINGA aims to take deep learning into new, more powerful worlds.