XML Parser Performance Analysis in Java: JAXB vs StAX vs Woodstox


Introduction

Over the last couple of weeks I have been working out how to deal with large amounts of XML data in a resource-friendly way, considering performance and other factors. The main problem I wanted to solve was how to process large XML files in chunks while, at the same time, providing upstream/downstream systems with data to process.
Of course, for a long time we have been using JAXB; its main advantage is quick time-to-market: if one possesses an XML schema, there are tools out there to auto-generate the corresponding Java domain model classes (Eclipse Indigo, the Maven JAXB plugins, and Ant tasks, to name a few). The JAXB API then offers a Marshaller and an Unmarshaller to write/read XML data, mapping to the Java domain model.
The disadvantage of JAXB is that it keeps the whole objectification of the XML document in memory, so the obvious question was: "How would our infrastructure cope with large XML files (in my case, files with more than 10,000 elements) if we were to use JAXB?" I could have simply produced a large XML file, written a client for it, and measured memory consumption.
As one probably knows, there are mainly two approaches to processing XML data in Java: DOM and SAX. With DOM, the XML document is represented in memory as a tree; DOM is useful if one needs cherry-pick access to tree nodes or if one needs to write brief XML documents. On the other side of the spectrum there is SAX, an event-driven technology where the document is parsed one XML element at a time, and for each significant XML event (such as START_DOCUMENT, START_ELEMENT, END_ELEMENT, etc.) a callback is "pushed" to a Java client, which then deals with it. Since SAX does not bring the whole document into memory but applies a cursor-like approach to XML processing, it does not consume huge amounts of memory. The drawback of SAX is that it processes the whole document start to finish, which is not necessarily what one wants for large XML documents. In my scenario, for instance, I'd like to pass XML elements to downstream systems as they become available, but perhaps only 100 elements at a time, implementing some sort of pagination. DOM seems too demanding from a memory-consumption point of view, whereas SAX seems too coarse-grained for my needs.
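To make the push model concrete, here is a minimal, self-contained SAX handler (an illustration written for this post, not part of the benchmark code) that counts <person> start-element events as the parser pushes them at us:

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

/** Tiny illustration of SAX's push model: the parser drives the
 *  processing, invoking our callbacks for each XML event. */
public class SaxPushDemo extends DefaultHandler {

    int persons = 0;

    @Override
    public void startElement(String uri, String localName, String qName,
            Attributes attributes) {
        if ("person".equals(qName)) {
            persons++; // callback "pushed" to us for every START_ELEMENT
        }
    }

    public static void main(String[] args) throws Exception {
        String xml = "<persons><person/><person/><person/></persons>";
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        SaxPushDemo handler = new SaxPushDemo();
        parser.parse(new InputSource(new StringReader(xml)), handler);
        System.out.println("persons seen: " + handler.persons); // prints 3
    }
}
```

Note that the client has no way to pause or skip ahead: once parse() is invoked, the parser runs the document start to finish.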
I remembered reading about StAX, a Java technology offering a middle ground: the ability to pull XML elements (as opposed to having them pushed at you, as with SAX) while remaining RAM-friendly. I looked into the technology and decided that StAX was probably the compromise I was looking for; however, I wanted to keep the easy programming model offered by JAXB for manipulating the data of the XML elements, so I really needed a combination of the two. While investigating StAX, I came across Woodstox; this open source project promises to be a faster XML parser than many others, so I decided to include it in the benchmark as well. I now had all the elements for a performance analysis giving me memory-consumption and processing-speed metrics for large XML documents.
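The pull model is the mirror image of the SAX example: the client asks the reader for the next event at its own pace. A minimal, self-contained sketch (again an illustration for this post, not benchmark code) using the JDK's StAX API:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

/** Tiny illustration of StAX's pull model: the client drives the
 *  processing, asking the cursor for one event at a time. */
public class StaxPullDemo {

    public static void main(String[] args) throws Exception {
        String xml = "<persons><person/><person/></persons>";
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        int persons = 0;
        while (reader.hasNext()) {
            // We decide when to advance the cursor; we could stop,
            // pause, or hand off a batch at any point.
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "person".equals(reader.getLocalName())) {
                persons++;
            }
        }
        reader.close();
        System.out.println("persons pulled: " + persons); // prints 2
    }
}
```

Because the client controls the loop, batching ("give me the next 100 elements") falls out naturally, which is exactly the property the pure SAX push model lacks.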
The Performance Analysis plan
In order to create a benchmark (performance analysis) I needed to do the following:
  • Create an XML schema which defined my domain model. This would be the input for JAXB to create the Java domain model
  • Create four large XML files representing the model, with 1,000 / 10,000 / 100,000 / 1,000,000 elements respectively
  • Have a pure JAXB client which would unmarshal the large XML files completely in memory
  • Have a StAX/JAXB client which would combine the low memory consumption of SAX-style technologies with the ease of the programming model offered by JAXB
  • Have a Woodstox/JAXB client with the same characteristics as the StAX/JAXB client (in a few words, I just wanted to change the underlying parser and see whether I could obtain any performance boost)
  • Record both memory consumption and speed of processing (e.g. how quickly would each solution make XML chunks available in memory as JAXB domain model classes)
  • Make the results available graphically, since, as we know, a picture is worth a thousand words.

The Domain Model XML Schema

I decided for a relatively easy domain model, with XML elements representing people, with their names and addresses. I also wanted to record whether a person was active.
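The schema listing did not survive the formatting of this post; the sketch below is a plausible reconstruction based on the description above and on the generated classes used later (PersonsType, PersonType, a persons root element). The target namespace and the field names (firstName, address1, and so on) are illustrative assumptions, not the original declarations:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Reconstruction sketch: namespace and field names are assumptions;
     only persons/person and the PersonsType/PersonType shapes are
     confirmed by the benchmark code. -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:tns="http://com.poc.xmlparser.example/large_file"
            targetNamespace="http://com.poc.xmlparser.example/large_file"
            elementFormDefault="qualified">

    <xsd:element name="persons" type="tns:personsType" />

    <xsd:complexType name="personsType">
        <xsd:sequence>
            <xsd:element name="person" type="tns:personType"
                         minOccurs="0" maxOccurs="unbounded" />
        </xsd:sequence>
    </xsd:complexType>

    <xsd:complexType name="personType">
        <xsd:sequence>
            <xsd:element name="firstName" type="xsd:string" />
            <xsd:element name="lastName" type="xsd:string" />
            <xsd:element name="address1" type="xsd:string" />
            <xsd:element name="address2" type="xsd:string" />
            <xsd:element name="city" type="xsd:string" />
            <xsd:element name="country" type="xsd:string" />
            <xsd:element name="zipCode" type="xsd:string" />
        </xsd:sequence>
        <xsd:attribute name="active" type="xsd:boolean" />
    </xsd:complexType>

</xsd:schema>
```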


JAXB to create the Java model

I wanted to use Maven for the ease it brings to build systems. This is the POM I defined for this little benchmark program:


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>com.poc.xmlparser.example</groupId>
    <artifactId>large-xml-parser</artifactId>
    <version>1.0</version>
    <packaging>jar</packaging>

    <name>large-xml-parser</name>
    <url>http://www.jemos.co.uk</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.jvnet.jaxb2.maven2</groupId>
                <artifactId>maven-jaxb2-plugin</artifactId>
                <version>0.7.5</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>generate</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <schemaDirectory>${basedir}/src/main/resources</schemaDirectory>
                    <schemaIncludes>
                        <include>**/*.xsd</include>
                    </schemaIncludes>
                    <extension>true</extension>
                    <args>
                        <arg>-enableIntrospection</arg>
                        <arg>-XtoString</arg>
                        <arg>-Xequals</arg>
                        <arg>-XhashCode</arg>
                    </args>
                    <removeOldOutput>true</removeOldOutput>
                    <verbose>true</verbose>
                    <plugins>
                        <plugin>
                            <groupId>org.jvnet.jaxb2_commons</groupId>
                            <artifactId>jaxb2-basics</artifactId>
                            <version>0.6.1</version>
                        </plugin>
                    </plugins>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.3.1</version>
                <configuration>
                    <archive>
                        <manifest>
                            <addClasspath>true</addClasspath>
                            <mainClass>com.poc.xmlparser.tests.xml.XmlPullBenchmarker</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.2</version>
                <configuration>
                    <outputDirectory>${project.build.directory}/site/downloads</outputDirectory>
                    <descriptors>
                        <descriptor>src/main/assembly/project.xml</descriptor>
                        <descriptor>src/main/assembly/bin.xml</descriptor>
                    </descriptors>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.5</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>uk.co.jemos.podam</groupId>
            <artifactId>podam</artifactId>
            <version>2.3.11.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>commons-io</groupId>
            <artifactId>commons-io</artifactId>
            <version>2.0.1</version>
        </dependency>
        <dependency>
            <groupId>com.sun.xml.bind</groupId>
            <artifactId>jaxb-impl</artifactId>
            <version>2.1.3</version>
        </dependency>
        <dependency>
            <groupId>org.jvnet.jaxb2_commons</groupId>
            <artifactId>jaxb2-basics-runtime</artifactId>
            <version>0.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.woodstox</groupId>
            <artifactId>stax2-api</artifactId>
            <version>3.0.3</version>
        </dependency>
    </dependencies>
</project>



Just a few things to notice about this pom.xml:
  • I use Java 6, since starting from this version Java contains all the XML libraries for JAXB, DOM, SAX and StAX.
  • To auto-generate the domain model classes from the XSD schema, I used the excellent maven-jaxb2-plugin, which, amongst other things, produces POJOs with toString, equals and hashCode support.
The POM also declares the jar plug-in, to create an executable jar for the benchmark program, and the assembly plug-in, to distribute an executable version of it. The source code for the analysis is attached to this post, so if you want to build and run it yourself, just unzip the project file, open a command line and run:
$ mvn clean install assembly:assembly
This command will place the *-bin.* files into the folder target/site/downloads. To run the benchmark program, use the following (-Dcreate.xml=true generates the XML files; don't pass it if these files already exist, e.g. after the first run):
$ java -jar -Dcreate.xml=true large-xml-parser-1.0.jar

Test Data Creation

To run this program I needed test data, for which I used PODAM, a Java tool that auto-fills POJOs and JavaBeans with data. The code is as simple as:

JAXBContext context = JAXBContext
        .newInstance("example.xmlparser.poc.com.large_file");

Marshaller marshaller = context.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
marshaller.setProperty(Marshaller.JAXB_ENCODING, "UTF-8");

PersonsType personsType = new ObjectFactory().createPersonsType();
List<PersonType> persons = personsType.getPerson();
PodamFactory factory = new PodamFactoryImpl();
for (int i = 0; i < nbrElements; i++) {
    persons.add(factory.manufacturePojo(PersonType.class));
}

JAXBElement<PersonsType> toWrite = new ObjectFactory()
        .createPersons(personsType);

File file = new File(fileName);
BufferedOutputStream bos = new BufferedOutputStream(
        new FileOutputStream(file), 4096);

try {
    marshaller.marshal(toWrite, bos);
    bos.flush();
} finally {
    IOUtils.closeQuietly(bos);
}




The XmlPullBenchmarker generates four large XML files under ~/xml-benchmark:
  • large-person-1000.xml (approx 300K)
  • large-person-10000.xml (approx 3M)
  • large-person-100000.xml (approx 30M)
  • large-person-1000000.xml (approx 300M)
Each file contains 1,000 / 10,000 / 100,000 / 1,000,000 <person> elements respectively, under a single <persons> root; each <person>'s string fields are filled with PODAM-generated random values such as oEfblFDpgh, HKOOm6SdqG, _q7o2rOY7g, and so on.

The Execution environments

This benchmark program is run on different platforms / environments to analyze the results:
  • Ubuntu 10, 32-bit, running as a virtual machine on Windows XP; CPU: Core 2 Duo P8400 @ 2.26GHz; 4GB RAM, of which 2GB dedicated to the VM. JVM: 1.6.0_25, HotSpot
  • Windows XP, hosting the above VM, therefore with the same processor. JVM: 1.6.0_24, HotSpot
  • Ubuntu 10, 32-bit, 2GB RAM, dual core. JVM: 1.6.0_24, OpenJDK


Strategy for XML unmarshalling

For the XML unmarshalling, three different strategies are used:
  • Pure JAXB
  • StAX + JAXB
  • Woodstox + JAXB

Pure JAXB unmarshalling

The code to unmarshal large XML files using JAXB is below:

private void readLargeFileWithJaxb(File file, int nbrRecords)
            throws Exception {

        JAXBContext ucontext = JAXBContext
                .newInstance("example.xmlparser.poc.com.large_file");
        Unmarshaller unmarshaller = ucontext.createUnmarshaller();

        BufferedInputStream bis = new BufferedInputStream(new FileInputStream(
                file));

        long start = System.currentTimeMillis();
        long memstart = Runtime.getRuntime().freeMemory();
        long memend = 0L;

        try {
            @SuppressWarnings("unchecked")
            JAXBElement<PersonsType> root = (JAXBElement<PersonsType>) unmarshaller
                    .unmarshal(bis);

            root.getValue().getPerson().size();

            memend = Runtime.getRuntime().freeMemory();

            long end = System.currentTimeMillis();

            LOG.info("JAXB (" + nbrRecords + "): - Total Memory used: "
                    + (memstart - memend));

            LOG.info("JAXB (" + nbrRecords + "): Time taken in ms: "
                    + (end - start));

        } finally {
            IOUtils.closeQuietly(bis);
        }

    }



I also accessed the size of the underlying PersonType collection to "touch" the in-memory data. Incidentally, debugging the application showed that all 10,000 elements were indeed available in memory after this line of code.


JAXB + StAX

For StAX, an XMLStreamReader is used to iterate over all <person> elements, passing each in turn to JAXB to unmarshal it into a PersonType domain model object. The code follows:

    private void readLargeXmlWithStax(File file, int nbrRecords)
            throws Exception {

        // set up a StAX reader
        XMLInputFactory xmlif = XMLInputFactory.newInstance();
        XMLStreamReader xmlr = xmlif
                .createXMLStreamReader(new FileReader(file));

        JAXBContext ucontext = JAXBContext.newInstance(PersonType.class);

        Unmarshaller unmarshaller = ucontext.createUnmarshaller();

        long start = System.currentTimeMillis();
        long memstart = Runtime.getRuntime().freeMemory();
        long memend = 0L;

        try {
            xmlr.nextTag();
            xmlr.require(XMLStreamConstants.START_ELEMENT, null, "persons");

            xmlr.nextTag();
            while (xmlr.getEventType() == XMLStreamConstants.START_ELEMENT) {

                JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr,
                        PersonType.class);

                if (xmlr.getEventType() == XMLStreamConstants.CHARACTERS) {
                    xmlr.next();
                }
            }

            memend = Runtime.getRuntime().freeMemory();

            long end = System.currentTimeMillis();

            LOG.info("STax - (" + nbrRecords + "): - Total memory used: "
                    + (memstart - memend));

            LOG.info("STax - (" + nbrRecords + "): Time taken in ms: "
                    + (end - start));

        } finally {
            xmlr.close();
        }

    }



Note that this time, when creating the context, I had to specify that it was for the PersonType class, and when invoking the JAXB unmarshalling I also had to pass the desired return type:
JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr,
                        PersonType.class);
Note that I don't do anything with the object; I just create it, to keep the benchmark as truthful as possible by not introducing any unnecessary steps.
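The pagination idea from the introduction — handing downstream systems, say, 100 elements at a time — can be layered on top of this pull loop. A minimal sketch of the buffering part (Batcher and Sink are hypothetical helper types written for this post, not part of the benchmark code; in the loop above, each unmarshalled PersonType would be fed to add()):

```java
import java.util.ArrayList;
import java.util.List;

/** Buffers items and hands them downstream in fixed-size pages. */
public class Batcher<T> {

    /** Downstream consumer of one page of items. */
    public interface Sink<T> {
        void accept(List<T> page);
    }

    private final int pageSize;
    private final Sink<T> sink;
    private final List<T> buffer;

    public Batcher(int pageSize, Sink<T> sink) {
        this.pageSize = pageSize;
        this.sink = sink;
        this.buffer = new ArrayList<T>(pageSize);
    }

    /** Adds one item; flushes a full page downstream when the buffer fills. */
    public void add(T item) {
        buffer.add(item);
        if (buffer.size() == pageSize) {
            flush();
        }
    }

    /** Flushes any remaining items (call once after the parse loop ends). */
    public void flush() {
        if (!buffer.isEmpty()) {
            sink.accept(new ArrayList<T>(buffer));
            buffer.clear();
        }
    }
}
```

Only one page of elements is alive at a time, so memory stays bounded regardless of the size of the input file.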


JAXB + Woodstox

For Woodstox, the approach is very similar to the one used with StAX. In fact, Woodstox provides a StAX2-compatible API, so all I had to do was provide the correct factory and... bang! I had Woodstox working under the covers.

    private void readLargeXmlWithFasterStax(File file, int nbrRecords)
            throws FactoryConfigurationError, XMLStreamException,
            FileNotFoundException, JAXBException {

        // set up a Woodstox reader
        XMLInputFactory xmlif = XMLInputFactory2.newInstance();
        XMLStreamReader xmlr = xmlif
                .createXMLStreamReader(new FileReader(file));

        JAXBContext ucontext = JAXBContext.newInstance(PersonType.class);

        Unmarshaller unmarshaller = ucontext.createUnmarshaller();

        long start = System.currentTimeMillis();
        long memstart = Runtime.getRuntime().freeMemory();
        long memend = 0L;

        try {
            xmlr.nextTag();
            xmlr.require(XMLStreamConstants.START_ELEMENT, null, "persons");

            xmlr.nextTag();
            while (xmlr.getEventType() == XMLStreamConstants.START_ELEMENT) {

                JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr,
                        PersonType.class);

                if (xmlr.getEventType() == XMLStreamConstants.CHARACTERS) {
                    xmlr.next();
                }
            }

            memend = Runtime.getRuntime().freeMemory();

            long end = System.currentTimeMillis();

            LOG.info("Woodstox - (" + nbrRecords + "): Total memory used: "
                    + (memstart - memend));

            LOG.info("Woodstox - (" + nbrRecords + "): Time taken in ms: "
                    + (end - start));

        } finally {
            xmlr.close();
        }

    }



In the above code I obtain the reader through StAX2's XMLInputFactory2, which resolves to the Woodstox implementation when Woodstox is on the classpath.
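An easy way to check which parser implementation StAX's service discovery actually picked is to print the concrete class of the factory. This stand-alone check (not part of the benchmark) works with any StAX setup:

```java
import javax.xml.stream.XMLInputFactory;

/** Prints the concrete XMLInputFactory implementation that StAX's
 *  service discovery selected for this classpath. */
public class WhichStax {
    public static void main(String[] args) {
        // With Woodstox on the classpath this typically prints
        // com.ctc.wstx.stax.WstxInputFactory; otherwise the JDK's
        // built-in implementation class name is printed.
        XMLInputFactory factory = XMLInputFactory.newInstance();
        System.out.println(factory.getClass().getName());
    }
}
```

Running it once under each configuration confirms that the Woodstox and plain-StAX runs really exercise different parsers.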

The Java main loop

Once all the required files are generated and in place (you obtain this by passing -Dcreate.xml=true, as mentioned above), the main method performs the following:

            System.gc();
            System.gc();

            for (int i = 0; i < 10; i++) {

                main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-1000.xml"), 1000);
                main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-10000.xml"), 10000);
                main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-100000.xml"),
                        100000);
                main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-1000000.xml"),
                        1000000);

                main.readLargeXmlWithStax(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-1000.xml"), 1000);
                main.readLargeXmlWithStax(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-10000.xml"), 10000);
                main.readLargeXmlWithStax(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-100000.xml"),
                        100000);
                main.readLargeXmlWithStax(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-1000000.xml"),
                        1000000);

                main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-1000.xml"), 1000);
                main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-10000.xml"), 10000);
                main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-100000.xml"),
                        100000);
                main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER
                        + File.separatorChar + "large-person-1000000.xml"),
                        1000000);
            }



First, it invites the GC to run, although, as we all know, this is at the JVM's discretion. It then executes each strategy ten times, to normalise RAM and CPU consumption. The final figures are obtained by averaging the ten runs.
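One refinement worth noting: the benchmark times each run with System.currentTimeMillis(), which can be coarse on some platforms; System.nanoTime() is monotonic and better suited to measuring elapsed time. A sketch of the alternative (Stopwatch is a hypothetical helper, not part of the benchmark code):

```java
/** Minimal elapsed-time helper. System.nanoTime() is monotonic, unlike
 *  System.currentTimeMillis(), so it is the safer choice for durations. */
public class Stopwatch {

    private final long start = System.nanoTime();

    /** Elapsed wall-clock time in milliseconds since construction. */
    public long elapsedMillis() {
        return (System.nanoTime() - start) / 1000000L;
    }

    public static void main(String[] args) throws InterruptedException {
        Stopwatch sw = new Stopwatch();
        Thread.sleep(50); // stand-in for the work being measured
        System.out.println("elapsed ms: " + sw.elapsedMillis());
    }
}
```

For differences of hundreds of milliseconds and up, as in this benchmark, the two clocks tell the same story; for finer-grained comparisons, nanoTime would matter.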


The Performance Analysis benchmark results

Below are the diagrams showing memory consumption across the different execution environments when unmarshalling the files with 1,000 / 10,000 / 100,000 / 1,000,000 elements respectively.
You will probably notice that memory consumption for the StAX-related strategies often shows a negative value. This means that there was more free memory after unmarshalling all the elements than there was at the beginning of the unmarshalling loop; this, in turn, suggests that the GC ran a lot more with StAX than with JAXB. This is logical if one thinks about it: since with StAX we don't keep all objects in memory, there are more objects available for garbage collection. In this particular case, I believe each PersonType object created in the while loop becomes eligible for GC while still in the young generation, and is then reclaimed. This, however, should have minimal impact on performance, since we know that reclaiming objects from the young generation space is done very efficiently.
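Those negative values also hint at a limitation of measuring with Runtime.freeMemory() deltas, as the benchmark does: freeMemory() reports free space within the current heap, so GC activity between the two samples skews the difference. A sketch of a slightly more robust probe (MemoryProbe is a hypothetical helper, not part of the benchmark code), which measures used heap as total minus free:

```java
/** Minimal heap-usage probe. Used heap = totalMemory - freeMemory is
 *  less sensitive to heap resizing than freeMemory alone, though GC
 *  between samples can still skew any delta. */
public class MemoryProbe {

    /** Returns an estimate of the currently used heap, in bytes. */
    public static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.gc(); // a hint only; the JVM may ignore it
        long before = usedMemory();
        byte[] block = new byte[8 * 1024 * 1024]; // allocate ~8 MB
        long after = usedMemory();
        System.out.println("allocated " + block.length
                + " bytes, used-heap delta: " + (after - before));
    }
}
```

Even this remains an estimate; for serious memory profiling, a tool reading GC logs or JMX memory MXBeans would give a clearer picture.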

Performance in the Windows environment:

[memory-consumption and processing-speed charts]

Performance in the Ubuntu environment:

[memory-consumption and processing-speed charts]

Conclusion

From the above analysis, the results on all three environments, although with some differences, tell the same story:
  • If you are looking for performance (i.e. XML unmarshalling speed), choose JAXB
  • If you are looking for low memory usage (and are ready to sacrifice some speed), use StAX.
Based on the above test scenario (for my personal needs), I wouldn't go for Woodstox; I'd choose either JAXB (if I needed processing speed and could afford the RAM) or StAX (if I didn't need top speed and was low on infrastructure resources). Both technologies are Java standards and part of the JDK starting from Java 6.

Reference

Source Code: Download large-xml-parser-1.0-project.zip

Executable: Download large-xml-parser-1.0-bin.zip

Data files: Download Jaxb vs Stax vs Woodstox.zip

1 comment:

skjolber said...

So why not read the files into memory first and avoid skewing the results?