How does digital transformation actually work?

Mastering digital transformation in your business and putting data-driven business models into practice requires a digital mindset and comprehensive empowerment that starts with corporate management.

Read more about our big data services here

Data is stupid; using it is clever

Big data Olympics

Four gold medals and one silver medal during the 2018 Winter Olympics are proof that Jac Orie is a successful speed skating coach. Why? It all has to do with data!

In the ice skating world, the name of Jac Orie is well established. He is the man behind the biggest successes of many Dutch speed skaters. Gerard van Velde in 2002, Marianne Timmer in 2006, Mark Tuitert in 2010 and Stefan Groothuis in 2014: they all won Olympic gold working with Orie. Apart from a mountain of medals, these skaters have left something valuable behind: a huge amount of data. Advanced analytics on almost two decades’ worth of data helped Orie to train his team even more smartly in the run-up to the 2018 Winter Olympic Games in Pyeongchang, South Korea.

Data science

The results of Orie’s big data project have been astounding so far. Millions of viewers all over the world saw Sven Kramer (men’s 5,000 metres), Carlijn Achtereekte (women’s 3,000 metres) and Kjeld Nuis (men’s 1,000 and 1,500 metres) skating to gold. And Patrick Roest (men’s 1,500 metres) won silver. Less visible is what exactly lies behind these successes. For many years, Orie has been using test data generated by skaters to calculate speed and stamina. For Pyeongchang, however, he went one step further and collaborated with Leiden-based data scientist Arno Knobbe.

The big data approach, whereby computing power is used to perform calculations on big volumes of data, has led to many useful insights. These include the relationship between the type of training and its timing, duration and intensity. A skater who has profited hugely from this is Kjeld Nuis. Data showed that stamina training in the morning was ineffective for him, leading to an improvement in his training programme – and two gold medals in Pyeongchang.
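To make the idea concrete, here is a minimal sketch of the kind of analysis described above. It is not Orie’s actual pipeline: the training log, its column names and the numbers are invented for illustration, and only show how session type and time of day could be related to later performance.

```python
# A minimal sketch (not Orie's actual pipeline): relating training sessions at
# different times of day to later test performance, using a hypothetical log
# with made-up column names and values.
import pandas as pd

# Hypothetical log: one row per session with its type, start hour and the
# change in a later performance test attributed to that training block.
log = pd.DataFrame({
    "session_type": ["stamina", "stamina", "sprint", "stamina", "sprint"],
    "start_hour":   [8, 17, 9, 7, 18],
    "perf_delta":   [-0.1, 0.4, 0.2, -0.2, 0.3],  # seconds gained on a test lap
})

log["time_of_day"] = log["start_hour"].apply(lambda h: "morning" if h < 12 else "afternoon")

# Average effect per training type and time of day: a negative mean for morning
# stamina sessions would mirror the kind of insight described above.
print(log.groupby(["session_type", "time_of_day"])["perf_delta"].mean())
```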

Supercompensation

For Orie, Knobbe and the skating sport in general, the big data journey is just beginning. For example, the phenomenon of ’supercompensation’ still needs to be figured out. Supercompensation is what happens when an athlete temporarily lowers the training intensity, leading to recovery of the body and an increase in racing performance. Obviously, this effect needs to be timed perfectly in the run-up to an important race. It’s a complex equation, with the results of training sessions sometimes showing up months later and with training types having different effects on performance for sprinting distances (especially the 500 and 1,000 metres), on the one hand, and longer distances (1,500 metres and above), on the other.
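The article does not say which model Orie and Knobbe use, but a classic way to reason about supercompensation and taper timing is a simple impulse-response (“fitness–fatigue”) model: every training load builds slow-decaying fitness and fast-decaying fatigue, and form is the difference between the two. The sketch below is illustrative only, and all parameters are assumptions.

```python
# Illustrative only: a simple impulse-response ("fitness-fatigue") model of
# supercompensation. All constants below are assumed, not taken from the article.
import math

def predicted_form(loads, k_fit=1.0, k_fat=2.0, tau_fit=42.0, tau_fat=7.0):
    """Predicted form after a series of daily training loads.

    Fitness builds and decays slowly (tau_fit), fatigue builds and decays
    quickly (tau_fat); form = fitness - fatigue. Easing off in the final days
    lets fatigue fall away faster than fitness: supercompensation.
    """
    n = len(loads)
    fitness = sum(w * math.exp(-(n - 1 - day) / tau_fit) for day, w in enumerate(loads))
    fatigue = sum(w * math.exp(-(n - 1 - day) / tau_fat) for day, w in enumerate(loads))
    return k_fit * fitness - k_fat * fatigue

hard_block = [100] * 28                    # four weeks of heavy daily training
tapered = hard_block[:-10] + [30] * 10     # same block, but easing off for the last 10 days
print(predicted_form(hard_block), predicted_form(tapered))  # the tapered schedule peaks higher
```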

Golden opportunities – everywhere

It is certainly not an exaggeration to say that the 2018 Winter Olympics have become the first big data Olympics. As a best practice, the example set by the Dutch skaters will be followed by other athletes looking to optimize their performance. And it’s not just in sporting events that data thinking is making such an impact. Many companies are becoming more data-driven. At Basefarm, we work together with some of these companies to explore their wealth of untapped data and find new use cases. In the manufacturing, service and maintenance industries, for instance, the use of predictive maintenance saves companies millions of euros every year. And this is only just the beginning. Undoubtedly, big data will shape the next Olympic Games as well as the business world of tomorrow. Our question to you: will you be a contender for gold?

About Ronald Tensen

Ronald Tensen is Marketing Manager at Basefarm in the Netherlands. He has broad experience in the internet and IT industry (B2B and B2C), a track record of developing and launching new consumer services and brands, and a strong customer focus – and of course he is a great team player!

Big Data Intro-Webinar!

Watch the webinar on demand! Big Data inspiration with Big Data Chief Evangelist Klaas Bollhöfer!

Big Data has become a buzzword in recent years. It is not just a stand-alone term but rather a combination of many aspects that together reveal the whole picture.

You might ask why Basefarm in particular is hosting a webinar about Big Data.
We have been a managed service provider of mission-critical solutions for years, and we are now expanding our business with our acquisition of the German company “The Unbelievable Machine Company”.

Our Big Data expertise is relevant and interesting for many industries – from an operational, development and “ideation” perspective.
We have reference cases such as Deutsche Post, Gebr. Heinemann, Audi, Deutsche Welle, Delivery Hero, Metro Group and Parship. Read more about *UM here!

In this session you will get an inspiring introduction in which Chief Evangelist Klaas Bollhöfer explores the possibilities of Big Data.
The webinar is for everyone, and you do not need any knowledge about the topic before the session. The session will be in English.

At the end of this session you will have a fundamental understanding of what Big Data is, the challenges that come with it, why you should start looking into it in 2018 and, last but not least, how you can turn your data into business opportunities.

Big Data Chief Evangelist – Klaas Bollhöfer

Klaas Bollhöfer has been Chief Evangelist of The Unbelievable Machine Company, a Basefarm company, for more than five years now, and is pioneering data science in Germany, Europe and beyond. At the interface of business, IT, artificial intelligence and design he develops cutting-edge strategies, spaces, services, teams and sometimes escape routes, and describes himself as a for-, side- and backward thinker. Besides that, he is founder and managing director of Birds on Mars, a Berlin-based consultancy exploring and developing the intersections of human and artificial intelligence. The time left is filled with lightning talks, guest lectures, program committee chairs and craft brewing. Klaas is a certified Scrum master, design thinker, mediator and coach and will never stop being curious.

Where does Big Data begin? – Many perspectives, one classification

Big Data is a buzz phrase that is used in various situations and is constantly developing.

To classify Big Data decisively is not so easy. Firstly, it is not just a stand-alone term but rather a combination of many aspects to reveal a whole picture. And secondly, Big Data is a buzz phrase that is used in various situations and is constantly developing. It is time to set things straight.

Buzz phrase? Collective term? Synonym?

All of the above. Fundamentally, Big Data represents large digital data volumes as well as the capturing, analyzing and evaluating of it. Therefore, Big Data is also the collective term for all digital technologies, architectures, methods and processes that are required for these tasks. Or as Hasso Plattner says: “Big Data is a synonym for large data volumes in a wide range of application areas as well as for the associated challenge of being able to process them.”

Large data volumes?

Very large. “By the year 2003, humans had created a total of 5 trillion gigabytes of data. In 2011 the same amount was created within 48 hours. Now, creating the same data volume requires just 7 minutes,” as RBB Radioeins illustrated in simple and effective terms. Driven by the internet, social networks, mobile devices and the Internet of Things, worldwide digital data volumes will grow another tenfold by 2020. In Germany alone the current figure of 230 billion GB will rise to 1.1 trillion GB.

This is exactly where Big Data comes into play: the huge data volumes are checked for relationships using algorithms, and the whole process requires a combination of several disciplines. “It ranges from traditional informatics and data science to interface design, and from machine learning, deep learning and artificial intelligence to mathematics, statistics and data interfaces,” explains Florian Dohmann, Senior Data Scientist at The Unbelievable Machine Company. “A lot of this is nothing new, but combining them all creates the basis for new opportunities.”

So it is only about data volumes?

Fundamentally, yes. Big Data is firstly defined by data volumes that are “too large, too complex, change too quickly or are structured too weakly to be analyzed with manual and traditional data processing methods,” according to Wikipedia. But to define where Big Data begins – i.e. from which point the targeted use of data becomes a Big Data project – you need to take a close look at the details.

Ready to deep dive into the data lake?

Think data lakes are just a new incarnation of data warehouses? Our resident expert Ingo Steins rates the two.

Data lakes and data warehouses only have one thing in common, and that is the fact that they are both designed to store data. Apart from that, the systems have fundamentally different applications and offer different options to users.

A data lake is a storage system or repository that gathers together enormous volumes of unstructured raw data. Like a lake, the system is fed by many different sources and data flows. Data lakes allow you to store vast quantities of highly diverse data and use it for big data analysis.

A data warehouse is a central repository for company management, so it’s quite different. Its primary role is as a component of business intelligence: it stores figures for use in process optimization planning, or for determining the strategic direction of the company. It also supports business reporting, so the data it contains must all be structured and in the same format.
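As a rough illustration of that difference, here is a toy sketch in Python – local JSON files and SQLite merely stand in for a real data lake and data warehouse, and the field names are invented. The lake accepts whatever arrives, while the warehouse only loads rows that fit its fixed, reporting-friendly schema.

```python
# A toy contrast (local files and SQLite stand in for real lake/warehouse
# systems, and the field names are invented).
import json, sqlite3, pathlib

events = [
    {"source": "weblog", "raw": "GET /product/42 HTTP/1.1 200"},
    {"source": "sensor", "temperature_c": 21.5, "device": "A7"},
]

# Data lake: dump every record unchanged, organised only by source.
lake = pathlib.Path("lake")
for i, event in enumerate(events):
    path = lake / event["source"] / f"{i}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(event))

# Data warehouse: a fixed schema; only data that matches it can be loaded.
dwh = sqlite3.connect("warehouse.db")
dwh.execute("CREATE TABLE IF NOT EXISTS sensor_readings (device TEXT, temperature_c REAL)")
dwh.execute("INSERT INTO sensor_readings VALUES (?, ?)", ("A7", 21.5))
dwh.commit()
```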

Challenges with data warehouses

Data warehouses aren’t actually designed for large-scale data analysis, and when used in this way these systems will reach their structural and capacity limits very quickly. We now generate enormous volumes of unstructured data which needs to be processed quickly.

Another limitation is the fact that high-quality analyses now draw on a variety of different data sources in different formats, including social media, weblogs, sensors and mobile technology.

A data warehouse can be very expensive. Large providers such as SAP, Microsoft and Oracle offer various data warehouse models, but you generally need relatively new hardware and people with the expertise to manage the systems.

Data warehouses also suffer from performance weaknesses. Their loading processes are complex and take hours, the implementation of changes is a slow and laborious process, and there are several steps to go through before you can generate even a simple analysis or report.

Virtually limitless data lakes

Data lakes, on the other hand, are virtually limitless. They aren’t products in the same way that data warehouses are, but are more of a concept that is put together individually and can be expanded infinitely.

Data lakes can store countless different data formats in very high volumes for indefinite periods of time. Because they are built using standard software, the storage is comparatively cost-effective too.

Data lakes can store huge volumes of data, but need no complex formatting or maintenance. The system doesn’t impose any limits on processes or processing speeds – in fact, it actually opens up new ways to exploit the data you have, and can therefore help companies more generally in the process of digitalization.

Put on your swim suit

All you really need to start a data lake is a suitable storage platform. This is relatively easy to set up with a solution like Hadoop. Companies that want to access a wide range of data and process it effectively in real time to answer highly specialized and complex questions will find that the data lake is the perfect infrastructure for realizing this goal.
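As a minimal sketch of what that can look like – assuming a Hadoop/HDFS cluster with Spark is already available, and using a placeholder HDFS path and invented field names – raw JSON files dropped into the lake can be queried directly, without first loading them into a fixed schema:

```python
# A minimal sketch, assuming Hadoop/HDFS and Spark are already in place.
# The HDFS path and the field names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-lake-sketch").getOrCreate()

# Read whatever raw JSON has landed in the lake; Spark infers a schema on read.
clicks = spark.read.json("hdfs:///data-lake/raw/clickstream/")

# Ad-hoc analysis straight on the raw data, e.g. page views per country.
(clicks
 .filter(clicks.event_type == "page_view")
 .groupBy("country")
 .count()
 .show())
```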

Ingo Steins

Ingo Steins is Unbelievable Machine’s Deputy Director of Operations, heading up the applications division from our base in Berlin. He has years of experience in software, data development and managing large teams, and now runs three such teams distributed across our sites. Ingo joined The Unbelievable Machine Company in January 2016.

Ingo Steins, Deputy Director of Operations, The Unbelievable Machine Company, part of Basefarm Group since June 2017

Data thinking is the holy grail of organic growth

Where does success come from? Nowadays, data thinking is a key component. It’s the culture that is responsible for SpaceX’s pioneering Falcon Heavy rocket launch as well as the secret behind hotels and bars remembering your favorite drink.

If there is anything that drives the most successful businesses right now, it is the clever use of data. Seen in this light, the acquisition by Basefarm of the Berlin-based The Unbelievable Machine Company (*um), the leading service provider for big data, cloud and managed cloud services in Germany and Austria, comes at exactly the right moment.

”Many of our customers are huge data owners. Data is the asset of the future,” explains Stefan Månsby, Senior Director of Product Management & Big Data at Basefarm. “European companies need to catch up with their North American counterparts. The big boys in Silicon Valley, such as Amazon and Google, are leading the race and there is nothing wrong with that. But some parts of Europe lag almost a decade behind when it comes to big data maturity. This needs to change.”

Great data leads to great ideas

Amongst many other industries, airlines and leisure companies will benefit greatly from having a 360-degree view of the customer. By gaining insight into customer behavior and needs, they can turn the customer’s next flight or stay into a ”super-tailored experience” because they already know the customer’s exact preferences. Even a result as simple as having your favorite drink waiting for you when you arrive at a hotel can make a big difference. But how do you get there as a company? You have to concentrate on data first, by putting all your data in one place.

”The first thing we recommend is what we call ’data thinking’,” says Månsby. ”You provide the essential hard data so that a company can make the necessary decisions. Part of this is data science. You test hypotheses and either they make sense and earn you the revenue, or they are a bad idea but you learn from it. By investing in such an agile culture, you can set yourself apart from your competitors and gain a market advantage. Focus on the idea of what you would like to do, not on how you will technically solve it. The idea will make your business unique and a leader, not the technology.”

Elon Musk: solar panels, batteries, cars and rockets

A big difference between traditional business and business that relies on data thinking lies in the way they evolve. With the latter, this is far from linear. An example is a company that builds self-driving buses. Their core business is to make such vehicles but, once the buses are driving around in cities, the company can start a side business in traffic reports based on the data they have collected. The new revenue streams could potentially even make public transport free for passengers.

”Data thinking enables new opportunities,” Månsby says. “Look at Ikea. Data thinking has made it Sweden’s second largest food exporter. Another example is Tesla, whose mission is to accelerate the world’s transition to sustainable energy. Hence, they need to develop the ultimate battery and then apply it in great cars to prove their point. That’s amazing. As a data-thinking company, you have a big advantage over linear competitors.”

Do you want to know more?

Here you can find our Data Thinking webinar recordings about AI: https://www.basefarm.com/en/services/big-data

About the Author

Stefan Månsby is Senior Director of Product Management & Big Data at Basefarm. He has broad experience in the IT industry and has driven change in many organizations over the years. His main passion is digital innovation, and he is also a great photographer and music producer.

Don’t let big data turn us into Big Brothers

What if Big Brother had access to big data technology? Big data handled without care might easily turn us all unknowingly into Big Brothers or their collaborators.

Although some might see it differently, the society foreseen and described by George Orwell in the dystopian novel 1984 has not become reality. But what if Big Brother had access to big data technology? Big data handled without care might easily turn us all unknowingly into Big Brothers or their collaborators.

Big Brother is symbolic of the totalitarian state Oceania where every citizen was under constant surveillance by the authorities.

Digital computers had only recently been invented when 1984 was published in 1949 and were hardly known to the general public. Computers and big data play no real role in the novel, but it is easy to imagine what Big Brother could have done with the ability to capture, curate, manage and process huge volumes of data.

Big Brother better off with computers

Without doubt, Big Brother would be far better off with big data capabilities. They had the manpower but not the necessary data tools.

Today we have those tools. We do not even need the manpower Big Brother possessed. Automatic capture and processing of tremendous amounts of data can be handled by computers, machine learning and Artificial Intelligence, assisted by a few people.

Read more: the Basefarm & *um big data definition.

What we define as big data lakes are the key resource for big data analyses. A data lake can be fed from multiple sources, and the potential results of the analyses can be very interesting.

Huge big data potential

By looking into big data, we can reveal insights about customers and entire societies, including new ways of distributing services and even entirely new business models. Basefarm is capable of providing these kinds of analyses.

The view of some big data evangelists is that companies possessing big data capabilities might be in a position to redefine their entire business. For instance, logistics companies produce enormous amounts of data. Evangelists suggest that this data represents such value that exploiting it – rather than the original logistics business – could become the core of such companies in the future.

Will they also become Big Brothers?

Avoid the dark path

Unless companies are careful, that might very well be the outcome. The path to Big Brother status starts with what data you collect in the big data lake.

Security and compliance are an integral part of daily Basefarm operations. The value of this knowledge is even higher in a new world of ever-increasing capabilities to collect, curate, communicate and move data.

Without compliance work, we can all easily step over the threshold and become something far from our intentions.

The Ministry of Love wants your logs

An example of the road to becoming Big Brother is how we handle logs. Infrastructure and application logs are true big data sources. At Basefarm we are enthusiastic about the opportunities provided by these sources. So much information is available to improve production and the customer experience, even leading to completely new ways of serving our customers.

However, logs contain personal data, and their geographical distribution, processing and use are regulated by the GDPR. Not least, a big topic is who can access the data. If the logs contain information about personal health, maybe only medical doctors or psychologists are allowed to access them. You definitely don’t want them falling into the hands of Big Brother.

The log data example is interesting. For all Basefarm knows, IT staff might already be unknowingly handling logs in a way that does not comply with regulations.
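One common precaution – shown here only as an illustrative sketch, not as a complete GDPR control and not as Basefarm’s actual tooling – is to pseudonymise obvious personal identifiers such as IP addresses before log lines enter a shared data lake, so analysts can still count and correlate events without seeing who they belong to. The salt and log format below are assumptions.

```python
# Illustrative only, not a complete GDPR control: replace IPv4 addresses in log
# lines with salted, truncated hashes before they enter the data lake.
import hashlib
import re

SALT = b"rotate-me-regularly"  # assumed secret, rotated outside this sketch
IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def pseudonymise(line: str) -> str:
    """Replace each IPv4 address with a salted, truncated hash."""
    def mask(match: re.Match) -> str:
        digest = hashlib.sha256(SALT + match.group(0).encode()).hexdigest()
        return "ip-" + digest[:12]
    return IP_RE.sub(mask, line)

print(pseudonymise('203.0.113.7 - - "GET /health HTTP/1.1" 200'))
# -> 'ip-<hash> - - "GET /health HTTP/1.1" 200'
```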

Unknowing collaborator

No sane person would like to be associated with Big Brother. We do not want to contribute to others becoming Big Brother, and definitely not by them using our data.

To avoid this we need comprehensive control of our data. We need to control where it is, what it contains, who has access to it and how it is shared with governments, service partners and companies that provide big data services, like Basefarm.

The key to avoiding collaborating with Big Brother is to handle your data correctly. It is a priceless asset for your business and you don’t want it falling into the wrong hands. Don’t risk becoming a Big Brother. Through compliance with the GDPR and other regulations, we can derive huge value from big data.

This might also be interesting

Download Data Thinking whitepaper

Data Thinking addresses the key subject areas of our time: data, algorithms, computing and mindset. It is designed to comprehensively support companies in times of great complexity and to guide them through their own digital development.
Learn how you can benefit from Data Thinking.

Watch recording from our GDPR Webinar

May 25 is coming soon: do you know all the responsibilities of data controllers and processors? Listen to our guide and learn who does what!