This is the second chapter of a three-part series on Evolutionary Feature Selection with big datasets. We will pick up where we left off, with a review of existing metaheuristics, focusing in particular on Genetic Algorithms.
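For readers who want a concrete picture before diving into the review, here is a minimal sketch of a genetic algorithm applied to feature selection. Everything in it is illustrative: the toy fitness function, the dataset width, and the operator choices (tournament selection, single-point crossover, bit-flip mutation) are assumptions for the sketch, not the algorithm developed in this series. A real run would replace the fitness function with a classifier's cross-validated score on the selected feature subset.

```python
import numpy as np

rng = np.random.default_rng(42)

N_FEATURES = 20       # hypothetical dataset width
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.05

def fitness(mask: np.ndarray) -> float:
    # Toy fitness: pretend the first 5 columns are the informative
    # features, and penalise larger subsets. Purely illustrative.
    informative = mask[:5].sum()
    penalty = 0.1 * mask.sum()
    return informative - penalty

def select(pop, scores):
    # Binary tournament selection: pick two at random, keep the fitter.
    i, j = rng.integers(len(pop), size=2)
    return pop[i] if scores[i] >= scores[j] else pop[j]

def crossover(a, b):
    # Single-point crossover of two binary feature masks.
    point = rng.integers(1, N_FEATURES)
    return np.concatenate([a[:point], b[point:]])

def mutate(mask):
    # Flip each bit independently with a small probability.
    flips = rng.random(N_FEATURES) < MUTATION_RATE
    return np.where(flips, 1 - mask, mask)

# Each individual is a binary mask: 1 keeps a feature, 0 drops it.
pop = rng.integers(0, 2, size=(POP_SIZE, N_FEATURES))
for _ in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in pop])
    pop = np.array([
        mutate(crossover(select(pop, scores), select(pop, scores)))
        for _ in range(POP_SIZE)
    ])

best = max(pop, key=fitness)
print("Selected features:", np.flatnonzero(best))
```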
Spark Streaming is one of the most widely used frameworks for real-time processing, alongside Apache Flink, Apache Storm and Kafka Streams. Compared to the others, however, it has more performance limitations: it processes data in micro-batches (small time windows) rather than event by event, which introduces latency.
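To illustrate the micro-batch model, the sketch below uses the classic PySpark DStream word count; the host, port and batch interval are placeholder assumptions. Records arriving on the socket are accumulated into 5-second batches before any processing runs, which is exactly the source of the latency mentioned above.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Hypothetical local setup; in production this would point at a cluster.
sc = SparkContext("local[2]", "MicroBatchDemo")
ssc = StreamingContext(sc, batchDuration=5)   # 5-second micro-batches

# Hypothetical source: a text stream on localhost:9999.
lines = ssc.socketTextStream("localhost", 9999)

# Word count per micro-batch: every record in the same 5-second
# window is grouped and processed together, not event by event.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```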
The human brain and our algorithms are hardly alike; Neuroscience and Deep Learning are quite different disciplines. Still, some concepts from the former lend support to ideas in the latter. In this post, we will talk about one of those ideas: memory.
One of the most fascinating ideas in Deep Learning is that each layer learns a representation of the data geared towards the problem being solved. The network as a whole thus builds its own notion of each concept, derived directly from the data.
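To make this tangible, here is a minimal Keras sketch; the layer names, sizes and dummy data are all illustrative assumptions. It shows that any intermediate layer of a network can be read out as its own representation of the input, simply by building a second model that stops at that layer.

```python
import numpy as np
from tensorflow import keras

# A small hypothetical classifier; the point is only to show that
# each layer exposes its own learned representation of the input.
inputs = keras.Input(shape=(64,))
h1 = keras.layers.Dense(32, activation="relu", name="layer_1")(inputs)
h2 = keras.layers.Dense(16, activation="relu", name="layer_2")(h1)
outputs = keras.layers.Dense(3, activation="softmax")(h2)
model = keras.Model(inputs, outputs)

# A second model that stops at an intermediate layer: its output is
# the representation that layer has learned for the task.
representation_model = keras.Model(inputs, model.get_layer("layer_2").output)

x = np.random.rand(5, 64).astype("float32")   # dummy stand-in data
print(representation_model.predict(x).shape)  # (5, 16)
```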
It all started with a call from Adrián Arias, Enterprise Relationship Manager at LinkedIn: “Congratulations, Stratio, you are one of the finalists for the ‘Premios IN’, nominated in the Best Recruitment Team category; only 5% of our clients make it to a nomination.”
Ever felt daunted by a new role? Remember that feeling when you had just started at a new company? For a Scrum Master, that feeling is still relatively raw… In today’s modern company, it is common to hear conversations about how to define the main responsibilities of a Scrum Master.
Deep learning applications are now truly amazing, ranging from image detection to natural language processing (for example, automatic translation). It gets even more impressive when Deep Learning works unsupervised and learns representations of the data on its own.
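One standard way to obtain such self-representations is an autoencoder, which learns by reconstructing its own input, with no labels involved. The sketch below is a minimal, illustrative version; the input size, bottleneck width, training settings and random stand-in data are assumptions for the example.

```python
import numpy as np
from tensorflow import keras

# Minimal autoencoder: the network is trained to reproduce its own
# input, so the bottleneck layer is forced to learn a compact
# self-representation of the data -- no labels required.
inputs = keras.Input(shape=(784,))
encoded = keras.layers.Dense(32, activation="relu")(inputs)    # bottleneck
decoded = keras.layers.Dense(784, activation="sigmoid")(encoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(256, 784).astype("float32")   # dummy stand-in data
autoencoder.fit(x, x, epochs=3, batch_size=64, verbose=0)

# The encoder half alone yields the learned representations.
encoder = keras.Model(inputs, encoded)
print(encoder.predict(x[:4]).shape)              # (4, 32) learned codes
```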