University Big Data: 23 June 2011
Build a compute cluster on the spot, install Hadoop and crunch gigabytes of data, all in one evening. In this university session you will be introduced to the core concepts of Hadoop MapReduce and the Hadoop Distributed Filesystem (HDFS) in a brief presentation. Afterwards we will get hands-on and build a real Hadoop cluster consisting of laptops brought by participants. The session will include installing and setting up the Hadoop software, loading it with a real dataset, and running a prefabricated MapReduce job to process the data and observe the dynamics of a running cluster.
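To give a feel for the programming model before the session, the MapReduce idea behind the prefabricated job can be sketched in a few lines of Python. This is a conceptual illustration only, not Hadoop's actual Java API: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group.

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word in the input.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group all emitted values by key, as Hadoop
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["hadoop stores data in hdfs", "mapreduce processes data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["data"])  # 2
```

On a real cluster the same three steps run distributed: mappers process HDFS blocks in parallel on the machines that store them, and the framework handles the shuffle over the network.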
Participants are encouraged to bring laptops; the more, the better. Please indicate in your registration whether you will bring a laptop to participate in the cluster!
Max class size: 20
Prerequisites (If you bring a laptop):
- Run OS X or Linux (VMs will do, but please have it pre-installed with a bare Linux and give it a lot of RAM)
- At least 2GB RAM
- Wired gigabit (1 Gb/s) Ethernet port
- Moderate amount of free disk space
- Be able to switch off any firewall
- Know how to set a static IP address for your laptop
- Know how to edit your /etc/hosts file, or do whatever else is needed to make your lookup daemon behave as required
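For the last two prerequisites, a sketch of what will be needed on the evening itself. The addresses, interface names, and hostnames below are hypothetical examples; the actual values will be assigned at the session.

```shell
# Assign a static IP to the wired interface (Linux; the interface
# name, e.g. eth0, may differ on your machine):
sudo ip addr add 192.168.1.42/24 dev eth0

# On OS X the equivalent would be something like:
#   sudo ifconfig en0 inet 192.168.1.42 netmask 255.255.255.0

# Map the cluster's hostnames to addresses in /etc/hosts so that
# name lookups resolve without DNS, e.g.:
#   192.168.1.1    hadoop-master
#   192.168.1.42   hadoop-worker-01
```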
18.30 Presentation on Hadoop and MapReduce essentials
19.15 Build a cluster and run jobs!
22.00 The end
Friso is Xebia's principal in the Netherlands on all things NoSQL, focusing on Hadoop and HBase for handling substantial amounts of data. Friso has a history of designing architectures that deliver scalable, performant and, above all, working software, with more than ten years behind the keyboard to draw on when teaching.
Friso has substantial real-life experience running Hadoop in production and solving data-crunching problems using MapReduce. He has organized several workshops about Hadoop and NoSQL and occasionally speaks on the topic of big data and NoSQL.