Posts Tagged ‘tika’

Articles

The case for the digital Babel fish

In General on 2010-11-24 by Jukka Zitting

“Just like Arthur Dent, who after inserting a Babel fish in his ear could understand Vogon poetry, a computer program that uses Tika can understand Microsoft Word documents.” This is how Tika in Action, our book on Apache Tika, introduces its subject. Download the freely available first chapter to read the full introduction.

Chris Mattmann and I started writing the Tika in Action book for Manning at the beginning of this year, and we’re now well past the halfway point. If we keep up this pace, the book should be out in print by next summer! And thanks to the Manning Early Access Program (MEAP), you can already pre-order and access an early access edition of the book at the Tika in Action MEAP page.

If you’re interested, use the “tika50” code to get a 50% early access discount when purchasing the MEAP book. You’ll still receive updates on all new chapters and of course the full book when it’s finished. Note that this discount code is valid only until December 17th, 2010.

We’re also very interested in all comments and other feedback you may have about the book. Use the online forum or contact us directly, and we’ll do our best to make the book more useful to you!

Update: The book is out in print now! Use the “tika37com” code for a 37% discount on the final book.


Forking a JVM

In ASF, Java on 2010-05-27 by Jukka Zitting

The thread model of Java is pretty good and works well for many use cases, but every now and then you need a separate process for better isolation of certain computations. For example, in Apache Tika we’re looking for a way to avoid OutOfMemoryErrors or JVM crashes caused by faulty libraries or troublesome input data.

In C and many other programming languages the straightforward way to achieve this is to fork separate processes for such tasks. Unfortunately Java doesn’t support the concept of a fork (i.e. creating a copy of a running process). Instead, all you can do is start up a completely new process. To create a mirror copy of your current process you’d need to start a new JVM instance with a recreated classpath and make sure that the new process reaches a state where you can get useful results from it. This is quite complicated and typically depends on predefined knowledge of what your classpath looks like. Certainly not something for a simple library to do when deployed somewhere inside a complex application server.
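To make the classpath problem concrete, here is a minimal sketch of the “start a completely new process” approach. The `JvmLauncher` helper is hypothetical (not Tika’s actual code); it rebuilds the child JVM’s command line from the current JVM’s own system properties, which only works when the classpath really is a plain list of files:

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class JvmLauncher {

    /** Starts a new JVM that mirrors this one's classpath. */
    public static Process startChildJvm(String mainClass, String... args)
            throws IOException {
        // Locate the java binary of the currently running JVM
        String javaBin = System.getProperty("java.home")
                + File.separator + "bin" + File.separator + "java";

        // Reuse the parent's classpath; this breaks as soon as classes
        // come from anywhere else (OSGi bundles, custom class loaders, ...)
        String classpath = System.getProperty("java.class.path");

        List<String> command = new ArrayList<>();
        command.add(javaBin);
        command.add("-cp");
        command.add(classpath);
        command.add(mainClass);
        command.addAll(Arrays.asList(args));

        return new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
    }
}
```

The classpath assumption in the middle is exactly the fragility described above: inside an application server there may be no usable `java.class.path` at all.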

But there’s another way! The latest Tika trunk now contains an early version of a fork feature that allows you to start a new JVM for running computations with the classes and data that you have in your current JVM instance. This is achieved by copying a few supporting class files to a temporary directory and starting the “child JVM” with only those classes. Once started, the supporting code in the child JVM establishes a simple communication protocol with the parent JVM using the standard input and output streams. You can then send serialized data and processing agents to the child JVM, where they will be deserialized using a special class loader that uses the communication link to access classes and other resources from the parent JVM.
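The parent’s side of such a stdin/stdout link can be sketched in a few lines. The `ForkClient` name and `send` method here are hypothetical, not Tika’s actual API, and the sketch omits the class-loading requests that the real protocol also serves over the same streams:

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ForkClient {

    /**
     * Sends a serializable task to the child process over its standard
     * input and reads back whatever object the child writes in response
     * on its standard output.
     */
    public static Object send(Process child, Serializable task)
            throws Exception {
        ObjectOutputStream out =
                new ObjectOutputStream(child.getOutputStream());
        out.writeObject(task);
        out.flush();

        ObjectInputStream in =
                new ObjectInputStream(child.getInputStream());
        return in.readObject();
    }
}
```

In the real feature the child deserializes the task with a class loader that fetches any missing classes from the parent over the same link, runs it, and streams the result back.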

My code is still far from production-ready, but I believe I’ve already solved all the tricky parts and everything seems to work as expected. Perhaps this code should go into an Apache Commons component, since it seems like it would be useful to other projects beyond Tika. Initial searching didn’t bring up other implementations of the same idea, but I wouldn’t be surprised if there are some out there. Pointers welcome.


Buzzword conference in June

In General on 2010-05-14 by Jukka Zitting

Like the Lucene conference I mentioned earlier, Berlin Buzzwords 2010 is a new conference that fills in the space left by the decision not to organize an ApacheCon in Europe this year. Going beyond the Apache scope, Berlin Buzzwords is a conference for all things related to scalability, storage and search. Some of the key projects in this space are Hadoop, CouchDB and Lucene.

I’ll be there to make a case for hierarchical databases (including JCR and Jackrabbit) and to present the Apache Tika project. The abstracts of my talks are:

The return of the hierarchical model

After its introduction the relational model quickly replaced the network and hierarchical models used by many early databases, but the hierarchical model has lived on in file systems, directory services, XML and many other domains. There are many cases where the features of the hierarchical model fit the needs of modern use cases and distributed deployments better than the relational model, so it’s a good time to reconsider the idea of a general-purpose hierarchical database.

The first part of this presentation explores the features that differentiate hierarchical databases from relational databases and NoSQL alternatives like document databases and distributed key-value stores. Existing hierarchical database products like XML databases, LDAP servers and advanced filesystems are reviewed and compared.

The second part of the presentation introduces the Content Repository for Java Technology (JCR) standard as a modern take on standardizing generic hierarchical databases. We also look at Apache Jackrabbit, the open source JCR reference implementation, and how it implements the hierarchical model.

and:

Text and metadata extraction with Apache Tika

Apache Tika is a toolkit for extracting text and metadata from digital documents. It’s the perfect companion to search engines and any other applications where it’s useful to know more than just the name and size of a file. Powered by parser libraries like Apache POI and PDFBox, Tika offers a simple and unified way to access content in dozens of document formats.

This presentation introduces Apache Tika and shows how it’s being used in projects like Apache Solr and Apache Jackrabbit. You will learn how to integrate Tika with your application and how to configure and extend Tika to best suit your needs. The presentation also summarizes the key characteristics of the more widely used file formats and metadata standards, and shows how Tika can help deal with that complexity.

I hear there are still some early bird tickets available. See you in Berlin!


Lucene conference in May

In General on 2010-04-21 by Jukka Zitting

This year there is no ApacheCon Europe, but a number of more focused events related to projects at Apache and elsewhere are showing up to fill the space.

The first one is Apache Lucene EuroCon, a dedicated Lucene and Solr user conference on 18-21 May in Prague. That’s the place to be if you’re in Europe and interested in Lucene-based search technology (or want to stop by for the beer festival). I’ll be there presenting Apache Tika, and the abstract of my presentation is:

Apache Tika is a toolkit for extracting text and metadata from digital documents. It’s the perfect companion to search engines and any other applications where it’s useful to know more than just the name and size of a file. Powered by parser libraries like Apache POI and PDFBox, Tika offers a simple and unified way to access content in dozens of document formats.

This presentation introduces Apache Tika and shows how it’s being used in projects like Apache Solr and Apache Jackrabbit. You will learn how to integrate Tika with your application and how to configure and extend Tika to best suit your needs. The presentation also summarizes the key characteristics of the more widely used file formats and metadata standards, and shows how Tika can help deal with that complexity.

The rest of the conference program is also now available. See you there!


Putting POI on a diet

In General on 2009-10-16 by Jukka Zitting

The Apache POI team is doing an amazing job at making Microsoft Office file formats more accessible to the open source Java world. One of the projects that benefits from their work is Apache Tika, which uses POI to extract text content and metadata from all sorts of Office documents.


However, there’s one problem with POI that I’d like to see fixed: it’s too big.

More specifically, the ooxml-schemas jar used by POI for the pre-generated XMLBeans bindings for the Office Open XML schemas is taking up over 50% of the 25MB size of the current Tika application. The pie chart below illustrates the relative sizes of the different parser library dependencies of Tika:

Relative sizes of Tika parser dependencies

Both PDF and the Microsoft Office formats are pretty big and complex, so one can expect the relevant parser libraries to be large. But the 14MB size of the ooxml-schemas jar seems excessive, especially since the standard OOXML schema package from which the ooxml-schemas jar is built is only 220KB in size.

Does anyone have good ideas on how to best trim down this OOXML dependency?
