In the last few years, we have witnessed a massive change in how data is generated, processed, and further leveraged to garner additional value and intelligence, driven by the emergence of new computational models based on deep learning and neural network applications. This profound change started in the data center, where deep learning techniques were used to extract insights from vast data volumes: classifying and recognizing images, enabling natural language and speech processing, and even learning to play complex strategy games. The change has also brought a wave of more power-efficient compute devices (based on GPGPUs and FPGAs) created specifically for these classes of problems, later joined by fully customized ASICs that further accelerate and extend the compute capabilities of these deep learning-based systems.
Big data applications use specialty GPGPU, FPGA, and ASIC processors to analyze large datasets with deep learning techniques, unmasking trends, patterns, and associations and enabling capabilities such as image recognition. As such, big data is based largely on information from the past: data at rest that typically resides in a cloud. A frequent outcome of such analysis is a "trained" neural network capable of executing a specific task, such as recognizing and tagging all faces in an image or video sequence. Voice recognition likewise demonstrates the power of neural networks.
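To make the train-then-deploy pattern above concrete, here is a minimal, hypothetical sketch (not from the article, and far simpler than a real deep network): "training" fits a tiny linear classifier on labeled toy data, and the resulting frozen weights are then used purely for inference, mirroring how a model trained on data at rest in the cloud is later applied to a single, specific task.

```python
# Toy illustration of the train-then-infer pattern described above.
# All names and data here are hypothetical examples, not a real API.

def train(samples, labels, epochs=50, lr=0.1):
    """Perceptron-style training: returns frozen weights and bias."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in zip(samples, labels):
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = y - pred                      # zero when already correct
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

def infer(w, b, x):
    """Inference only: apply the already-trained model to a new input."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy dataset: label is 1 when the two coordinates sum to more than 1.
data = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.7, 0.9)]
labels = [0, 1, 0, 1]

w, b = train(data, labels)            # "cloud" training phase
print(infer(w, b, (0.85, 0.9)))       # deployed inference on unseen input
print(infer(w, b, (0.05, 0.1)))
```

The key point of the sketch is the separation of phases: once `train` has run, `infer` never updates the weights, just as a face-tagging or voice-recognition model executes its one learned task at deployment time.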
To read more, please visit: http://bit.ly/2lB86jU