2018: Human and Machine Are Getting Readier to Solve the Problems

Nowadays, more and more industries have started to recognize the benefits of big data analytics, which can support ever-larger analytics workloads. This relates to the sheer volume of data being created: an estimated 90% of the data in the world today was generated in the last two years alone (IBM Cloud Marketing, 2017).

In 2017, the momentum of Big Data continued to build, driven by systems that can handle large volumes of both structured and unstructured data. Over that year, various industries began storing, processing, and extracting value from data of all sizes and forms; this is where that 90% figure comes from. We began to get a solid definition of what the Internet of Things (IoT) is all about and to understand how (big) data is its lifeblood. Furthermore, enterprise IT got more serious about applying artificial intelligence through machine learning, which makes computers less artificial and a lot more intelligent.

In 2018, Big Data is no longer just an option but a ticket to the throne. Enterprises' growing awareness of big data has led them to collect stockpiles of disconnected and unstructured data across many industries: industrial, healthcare, government, agriculture, and more. Inevitably, this floods their storage. Experts predict that the amount of data generated annually will increase 4,300% by 2020, and that enterprises will be responsible for storing 80% of it (Big Data Universe Beginning to Explode, CSC, 2012).

So, how can we manage this huge amount of data? How did the term "big data" become so attractive to industry? How can a machine suggest business decisions? All the answers lead to the use of machine learning (a branch of artificial intelligence) to manage big data.

Since artificial intelligence was first coined as a term back in 1956 by Dartmouth professor John McCarthy, machine learning has often been explained by analogy with raising a child: we educate the machine with examples of the things it will need to do in the future. The good news is that the more data we have, the more capable the machine becomes, because it trains itself on that data to grow smarter.
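To make the "more data, smarter machine" idea concrete, here is a minimal sketch in plain Python (not from any particular framework): a one-parameter model fits the relationship y = 2x from noisy examples by gradient descent, and its estimate of the true slope tends to improve as it sees more data.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def train_slope(n_samples, epochs=200, lr=0.01):
    """Fit y = w * x by stochastic gradient descent on noisy samples of y = 2x."""
    xs = [random.uniform(-1, 1) for _ in range(n_samples)]
    data = [(x, 2.0 * x + random.gauss(0, 0.5)) for x in xs]
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w += lr * (y - w * x) * x  # step down the squared-error gradient
    return w

for n in (5, 50, 500):
    print(f"{n:>3} samples -> learned slope {train_slope(n):.3f}")
```

With only a handful of samples the learned slope can land well away from 2.0; with hundreds it settles close to the true value, which is the intuition behind feeding machines ever more data.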

Big data means a huge amount of data, huge not only in size but also in variety of forms. Since the development of the IoT, we finally have many sources of data, such as sensors, wearables, phones, apps, and other connected devices, gathering a veritable tidal wave of additional information for organizations to interrogate. This data comes in two forms: structured and unstructured.

Structured data is anything that fits in a relational database: values drawn from a defined set, organized within a specific set of attributes. Unstructured data is everything that does not fit in a relational database, which includes presentations, videos, audio recordings, social media posts, RSS feeds, documents, and free text. Since the dawn of IT, structured data has been the main resource for analysis.
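The contrast can be shown in a few lines of Python using the standard library's sqlite3 module (the table and example values here are illustrative, not from the article):

```python
import sqlite3

# Structured data fits a relational schema: fixed, typed columns that can
# be queried directly with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (device TEXT, reading REAL, ts TEXT)")
conn.execute("INSERT INTO sensors VALUES ('thermo-1', 36.6, '2018-01-01')")
row = conn.execute(
    "SELECT reading FROM sensors WHERE device = 'thermo-1'"
).fetchone()
print(row[0])  # the reading is immediately usable: 36.6

# Unstructured data has no fixed schema: before a machine can use it,
# it must first be parsed, tagged, or modeled.
social_post = "Loving the new wearable! Battery could be better though..."
```

A query answers a question about the sensor table instantly; extracting the sentiment hidden in the social post is exactly the kind of job that calls for machine learning.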

Machine learning is valuable for analyzing structured data, but it becomes necessary for the unstructured kind because of the difference in scale: a human simply cannot sift through that amount of data. You get the picture.

In 2018, the emergence of ready-to-use machine-learning tools and models continues to be a reason to get excited about Big Data. Data scientists can choose from a growing number of open-source machine-learning frameworks, including Google's TensorFlow, Apache MXNet, Facebook's Caffe2, and Microsoft's Cognitive Toolkit. As a result, the task of building models has never been easier. For example, Amazon Web Services (AWS) provides deep-learning AMIs (Amazon Machine Images) preloaded with the leading ML frameworks, ready to use on the AWS cloud. Likewise, Google's TensorFlow Playground lets users explore neural networks, the foundation of these frameworks, using simple data sets and pre-trained models.
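Under the hood, the networks you see in TensorFlow Playground are just layers of weighted sums passed through nonlinearities. As a minimal sketch in plain Python (weights hand-picked rather than learned, for the classic XOR demo, not code from any of the frameworks above):

```python
import math

def sigmoid(z):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# A tiny 2-input, 2-hidden-neuron, 1-output network whose hand-picked
# weights implement XOR: hidden unit 0 acts like OR, unit 1 like NAND,
# and the output unit ANDs them together.
W1 = [[20.0, 20.0], [-20.0, -20.0]]
b1 = [-10.0, 30.0]
W2 = [20.0, 20.0]
b2 = -30.0

def forward(x1, x2):
    h = [sigmoid(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", round(forward(a, b)))  # 0, 1, 1, 0
```

Frameworks like TensorFlow automate what this sketch does by hand: they learn the weights from data, scale the arithmetic to millions of parameters, and run it on GPUs.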

What does this mean? It means we can cut the time needed to train a machine by using production-ready or open-source machine-learning tools. Humans and machines are getting readier to solve the world's problems, and that is exactly what we aim for: discovering more possibilities for society by unlocking the value of data!
