Sunday, 7 May 2017
In chapter 4 of the Hadoop course we set up a new Virtual Machine, running Linux. This step is not needed if you're already using Linux or a Mac to run the course, but is needed for Windows users.
In the course we install the Java JDK version 7. However, this is no longer available from the repositories, and you'll now need to use Java version 8 instead. We've tested the course with this version of Java and are not aware of any issues.
To install Java version 8 issue the following command:
sudo apt-get install openjdk-8-jre-headless
instead of
sudo apt-get install openjdk-7-jre
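Once the installation completes, you can confirm which version of Java is on your path with:
java -version
which should report a 1.8.x version.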
Thursday, 27 April 2017
Errata - Java EE Module 1, Chapter 6
We've recently been made aware of a small issue which affects chapter 6 (the CDI chapter) of Java EE with Wildfly, Module 1. You might experience this issue if you're using some versions of Java 8 to create your project - we're aware it is a problem in Java 8 update 60 (1.8.0_60) and above.
In the video we talk about the different ways to tell Java EE which implementation of a particular interface should be injected at runtime, when multiple implementations exist in your project. We first demonstrate the @Default and @Alternative annotations, then we look at specifying the required implementation in beans.xml, and finally we discuss qualifiers, which allow us to specify a different implementation in one specific area of our code.
In the video, we end up with all 3 methods in our project in use at the same time, and this worked fine at the time of recording. However we have found that when you then add in your own custom annotations (such as @ProductionDao - the example we use in the video) the deployment might fail. It appears that there is a bug in Java 8 which means that you can't use custom annotations together with beans.xml.
So you are fine to use @Default and custom annotations together, but not beans.xml and custom annotations. As most users will agree, using @Default and @Alternative is much easier than editing the XML file, so this probably won't cause much difficulty in practice - but if you are getting a message that the build has failed, this will be why. If you're following along with the chapter, simply remove beans.xml from your project and you will be able to continue with no further issues.
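For reference, here's a rough sketch of the custom qualifier approach. The @ProductionDao annotation is the one we use in the video; the CustomerDao interface and the two implementation class names are just illustrative, so don't worry if yours are different:

// ProductionDao.java - the custom qualifier
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER})
public @interface ProductionDao {}

// CustomerDao.java - the interface with more than one implementation
public interface CustomerDao {
    java.util.List<String> findAllCustomers();
}

// TestDataDao.java - used wherever an injection point has no qualifier
import java.util.Arrays;
import java.util.List;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Default;

// @ApplicationScoped is a bean-defining annotation, so the class is still
// discovered even if there is no beans.xml in the project
@Default
@ApplicationScoped
public class TestDataDao implements CustomerDao {
    public List<String> findAllCustomers() {
        return Arrays.asList("Test customer");
    }
}

// DatabaseDao.java - chosen only where the injection point asks for @ProductionDao
import java.util.Arrays;
import java.util.List;
import javax.enterprise.context.ApplicationScoped;

@ProductionDao
@ApplicationScoped
public class DatabaseDao implements CustomerDao {
    public List<String> findAllCustomers() {
        // in a real project this would query the database
        return Arrays.asList("A customer loaded from the database");
    }
}

// inside any client bean (a controller, for example):
@javax.inject.Inject
@ProductionDao
private CustomerDao customerDao;

Because the qualifier does all the work at the injection point, no beans.xml entry is needed for this combination, which is why removing the file lets the deployment succeed.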
Sunday, 5 March 2017
Why I have never attempted to teach JavaScript...
It must be time for the "which is the best programming language" debate again... Here's an interesting article that claims "Jobs-wise, you’d be hard-pressed to find a better language than Java as your primary programming language" - that does seem to reflect the reality we see... actually the comments are even more fascinating than the article!
http://www.theregister.co.uk/…/03/03/pizza_roaches_and_java/
Monday, 17 October 2016
Unsatisfied Dependencies in Spring Boot 1.4?
My colleague Richard Chesterwood has just posted on his blog about a problem with Spring Boot 1.4... if you're getting an issue with unsatisfied dependencies, check it out:
https://richardchesterwood.blogspot.co.uk/2016/10/spring-boot-crashing-due-to-unsatisfied.html
Tuesday, 23 August 2016
Tomcat problems with Java 8
If you're doing any of our courses that use Tomcat, be aware that the latest update to Java 8 (1.8.0_91) seems to have broken JSP compilation for all versions of Tomcat up to and including 8. We're not sure why this is happening, but as a quick fix either use Java 1.8.0_77 or earlier, OR use Tomcat 9, which is confirmed to fully support Java 8.
(Note that Tomcat 9 is still in Alpha, so doing this carries some risk - the safest choice is to use an earlier Java).
Thanks to all those who have reported this, and you can also follow a Stackoverflow post at http://stackoverflow.com/…/spring-mvc-unable-to-compile-cla…
Thursday, 26 March 2015
Java Advanced Course now live!
Today's an exciting day - we've just put the Java : Advanced Topics course live on the Virtual Pair Programmers' website.
I'm really pleased with this course - I think it is going to be really helpful to lots of Java developers. It covers topics which you don't tend to learn about in most Java courses because they are that bit more advanced, but which are vital for really good Java developers to know about.
For example, we go into depth on how the LinkedHashMap actually works, what can go wrong when you're writing multi-threaded apps, and how to avoid it, and even how to load-test your application so that you can be sure it won't run out of memory when you put it onto the production server!
I hope you enjoy it!
Monday, 9 February 2015
An update on Hadoop Versions
Our popular Hadoop for Java Developers course was recorded using version 2.4.0 of Hadoop. Since the course was released there have been some further releases of Hadoop, with the current version being 2.6.0.
There are no differences between the two versions of Hadoop in the content that we cover on the course, so the course is completely valid whether you use 2.4.0 or 2.6.0. In this blog post, however, I want to point out a reason to stick with version 2.4.0, and a couple of pointers that you should be aware of if you are going to use 2.6.0. I'll also cover the process of upgrading from 2.4.0 to 2.6.0.
Which Version of Hadoop should I use?
If you're starting to develop with Hadoop today then you might just want to download the latest version from the Hadoop website (2.6.0), and there is really only one reason I can think of not to do this... and that is that Amazon's Elastic MapReduce (EMR) service, which can be used to run Hadoop jobs "in the cloud", is not yet compatible with versions of Hadoop newer than 2.4.0.
Although the code that you'll write on the course is identical in both versions of Hadoop, if you compile your code with the 2.6.0 jar files you'll not be able to run it on EMR. For this reason we suggest you consider sticking with 2.4.0, at least while learning Hadoop, so that you can experience EMR (we cover how to set up and run an EMR job on the course). If you plan to use Hadoop on EMR in a production scenario then you must stick to 2.4.0 until Amazon update the EMR service to work with newer versions.
You can download a copy of version 2.4.0 from this link.
If I am going to use 2.6.0, what do I need to know?
The only things to be aware of if you wish to study the course with version 2.6.0 of Hadoop are:
(1) Your standard installation path will be /opt/hadoop-2.6.0/ instead of /opt/hadoop-2.4.0/, so you'll want to change the references to it in the following two script files that are provided with the course:
startHadoopPseudo
startHadoopStandalone
(2) When you install Hadoop, you'll edit either .bashrc or .profile - make sure the reference to the folder name is correct in here too. You'll also be creating symbolic links to the Hadoop configurations - again, make sure you use the correct folder names when you set these up (see the example below).
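For reference, after the change the relevant lines end up looking something like this. The exact contents of your files may differ slightly - HADOOP_PREFIX and PATH are the variables the course uses, and the sed command is just one quick way to update both start-up scripts, assuming the 2.4.0 path appears literally in them and you run it from the directory containing the scripts:
# in .bashrc (or .profile)
export HADOOP_PREFIX=/opt/hadoop-2.6.0
export PATH=$PATH:$HADOOP_PREFIX/bin
# update both course start-up scripts in one go
sed -i 's|hadoop-2.4.0|hadoop-2.6.0|g' startHadoopPseudo startHadoopStandalone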
What happens if I want to upgrade from 2.4.0 to 2.6.0?
If you have been running with 2.4.0 and wish to upgrade to 2.6.0, you just need to do the following (there's a rough command-line sketch of these steps after the list):
(1) Download and unpack the 2.6.0 files from the Hadoop website - place these in /opt/hadoop-2.6.0/
(2) Create the configuration folders under /opt/hadoop-2.6.0/etc as you did for Hadoop 2.4.0 (you can actually copy the configuration folders from your 2.4.0 installation as they'll be valid for 2.6.0)
(3) Edit your .bashrc (Linux) or .bash_profile (Mac) to change the location of the Hadoop files in the HADOOP_PREFIX and PATH variables from 2.4.0 to 2.6.0
(4) Close your terminal window and open a new one to ensure that the updated environment variables, including PATH, are loaded.
(5) Run the resetHDFS script - you must be in the Scripts directory to run it. This will reformat the HDFS file system and create the symbolic links needed to use the Pseudo configuration. After running the script, enter the jps command and check that the various daemons (namenode, datanode etc.) are running.
(6) Your code, compiled with 2.4.0, will work in 2.6.0 - if you wish to recompile with 2.6.0, remove all the Hadoop jar files from the build path and then re-add them from the folders under /opt/hadoop-2.6.0/share/hadoop
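Putting the steps above together, the upgrade looks roughly like this from the command line. The paths ~/Downloads and ~/Scripts are only examples - use wherever the downloaded archive and the course scripts actually live on your machine:
# (1) unpack the 2.6.0 download into /opt
cd /opt
sudo tar xzf ~/Downloads/hadoop-2.6.0.tar.gz
# (2) copy your existing configuration folders across from the 2.4.0 installation
sudo cp -r /opt/hadoop-2.4.0/etc/* /opt/hadoop-2.6.0/etc/
# (3) edit .bashrc (Linux) or .bash_profile (Mac) so that HADOOP_PREFIX and PATH
#     point at /opt/hadoop-2.6.0 rather than /opt/hadoop-2.4.0
# (4) open a fresh terminal, then (5) reformat HDFS and recreate the symbolic links
cd ~/Scripts
./resetHDFS
jps    # check that the namenode, datanode and the other daemons are listed
# (6) only needed if you want to recompile against 2.6.0: swap the Hadoop jars on
#     your build path for the ones under /opt/hadoop-2.6.0/share/hadoop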