Making the world a happier and better place is becoming a habit for Facebook. The social media giant believes in sharing and in helping people connect through its platform, and with offices spread across the globe it is now digging deeper into machine learning. Facebook already relies heavily on deep learning to run its services and improve the experience of its users, and its latest announcement is good news for developers: its deep learning and artificial intelligence tools will be released as open source for anyone to use in building their own web services.
Facebook's research lab has been combining algorithms with artificial intelligence to build software that can learn and make predictions on its own, recognizing patterns, behaviors and much more. Facebook developed these tools, has used them successfully, and has kept improving them as part of its in-house AI project. Now it is contributing them as open source code to Torch, an open source project focused on deep learning.
Such a generous move will help developers and researchers learn from sophisticated technology that comes straight out of Facebook's research lab. As one of the lab's own researchers noted, however, making efficient use of these tools requires a high level of skill.
The social media company, however, also stands to gain from making its AI tools open source. There is a large pool of talented researchers out there who will use and refine them, and these modules will expand the open source computing framework and increase the traction of artificially intelligent web services.
To conclude, Facebook has opened up new possibilities, and we hope that other innovators will follow its lead in sharing their work with the world.
The release includes a number of CUDA-based modules and containers:
- Containers that let the user parallelize training across multiple GPUs using either the data-parallel model (the mini-batch is split over GPUs) or the model-parallel model (the network is split over multiple GPUs).
- An optimized lookup table, often used when learning embeddings of discrete objects (e.g. words) and in neural language models.
- A hierarchical SoftMax module to speed up training over an extremely large number of classes.
- Cross-map pooling (sometimes known as MaxOut) often used for certain types of visual and text models.
- A GPU implementation of 1-bit SGD, based on the paper by Frank Seide et al.
- A significantly faster Temporal Convolution layer, which computes the 1-D convolution of an input with a kernel, typically used in ConvNets for speech recognition and natural language applications. Facebook's version improves upon the original Torch implementation by utilizing the same BLAS primitives in a significantly more efficient regime. Observed speedups range from 3x to 10x on a single GPU, depending on the input sizes, kernel sizes, and strides.
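The speedup in the temporal convolution layer comes from recasting the 1-D convolution as a single large matrix multiplication, so that one optimized BLAS GEMM call does all the work at once. Here is a minimal NumPy sketch of that idea — an illustration of the technique, not Facebook's actual implementation (the function names are ours):

```python
import numpy as np

def temporal_conv_gemm(x, weight, stride=1):
    """1-D (temporal) convolution expressed as one matrix multiply.

    x:      (seq_len, in_features)  input sequence
    weight: (out_features, kernel_width, in_features)
    Returns (out_len, out_features).
    """
    seq_len, in_features = x.shape
    out_features, kw, _ = weight.shape
    out_len = (seq_len - kw) // stride + 1
    # "Unfold" the input: each output step sees kw consecutive frames.
    frames = np.stack([x[t * stride : t * stride + kw].ravel()
                       for t in range(out_len)])        # (out_len, kw*in)
    # One GEMM computes every output position and feature at once.
    return frames @ weight.reshape(out_features, -1).T  # (out_len, out_features)

def temporal_conv_naive(x, weight, stride=1):
    """Reference triple loop the GEMM version must match."""
    out_features, kw, _ = weight.shape
    out_len = (x.shape[0] - kw) // stride + 1
    out = np.zeros((out_len, out_features))
    for t in range(out_len):
        for f in range(out_features):
            out[t, f] = np.sum(x[t * stride : t * stride + kw] * weight[f])
    return out
```

Both functions produce the same result; the GEMM version simply delegates the arithmetic to the BLAS library, which is where the kind of speedup described above comes from.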
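The 1-bit SGD module follows the idea from Seide et al.'s paper: before gradients are exchanged between GPUs, each element is quantized to a single bit, and the quantization error is carried forward into the next step so that nothing is lost on average. A hedged NumPy sketch of that quantize-with-error-feedback step (purely illustrative; the function and variable names are ours, not the library's):

```python
import numpy as np

def one_bit_quantize(grad, residual):
    """Quantize a gradient tensor to one bit per element.

    Adds the residual (quantization error carried over from the previous
    step), encodes each element as the mean magnitude of its sign group,
    and returns the new residual to feed back into the next step.
    """
    g = grad + residual                  # error feedback
    pos = g >= 0
    # Reconstruction values: mean of the positive / negative parts.
    pos_val = g[pos].mean() if pos.any() else 0.0
    neg_val = g[~pos].mean() if (~pos).any() else 0.0
    quantized = np.where(pos, pos_val, neg_val)
    residual = g - quantized             # kept locally, never transmitted
    return quantized, residual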
What is Torch?
Torch is an open source scientific computing framework built on LuaJIT that is widely used for machine learning research. A summary of its core features:
- A powerful N-dimensional array.
- Lots of routines for indexing, slicing, transposing.
- Amazing interface to C, via LuaJIT.
- Linear algebra routines.
- Neural network and energy-based models.
- Numeric optimization routines.
- Fast and efficient GPU support.
- Embeddable, with ports to iOS, Android and FPGA backends.
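For readers coming from Python, the N-dimensional array at Torch's core plays the same role that NumPy's ndarray does. The first three bullets above correspond roughly to operations like these (NumPy is used here purely as an analogy, not as Torch's actual API):

```python
import numpy as np

# A 3-D array; Torch calls these Tensors.
t = np.arange(24).reshape(2, 3, 4)

sliced = t[0, 1:3, :]              # indexing and slicing
transposed = t.transpose(2, 0, 1)  # permuting dimensions
```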