
Saturday, May 25, 2019

Apache Tomcat 9 Translation

1. Translation Processing Time


For those who have not learned and used English extensively, messages are easier and faster to understand in their own language. The Apache Tomcat project has the following English message as an example (ref: https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/core/LocalStrings.properties#L20):

applicationContext.addListener.iae.sclNotAllowed=Once the first ServletContextListener has been called, no more ServletContextListeners may be added.

The message means, as your brain works out after a round of translation, that once any ServletContextListener has been invoked, no further ServletContextListener can be added. It can be translated into Korean like this (ref: https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/core/LocalStrings_ko.properties#L20):

applicationContext.addListener.iae.sclNotAllowed=첫번째 ServletContextListener가 호출되고 나면, 더 이상 ServletContextListener들을 추가할 수 없습니다.
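This language selection is plain `java.util.ResourceBundle` locale fallback: with a Korean default locale, `LocalStrings_ko.properties` wins; otherwise the base `LocalStrings.properties` is used. Here is a minimal sketch of that mechanism, using nested bundle classes in place of the real properties files (the class and key names below are illustrative, not Tomcat's own):

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

public class BundleDemo {

    // Stand-in for LocalStrings.properties (the English default).
    public static class LocalStrings extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {
                { "sclNotAllowed",
                  "Once the first ServletContextListener has been called, "
                  + "no more ServletContextListeners may be added." }
            };
        }
    }

    // Stand-in for LocalStrings_ko.properties (the Korean translation).
    public static class LocalStrings_ko extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {
                { "sclNotAllowed",
                  "첫번째 ServletContextListener가 호출되고 나면, "
                  + "더 이상 ServletContextListener들을 추가할 수 없습니다." }
            };
        }
    }

    // Resolve the message for the given locale; locales without a
    // dedicated bundle fall back to the base (English) bundle.
    public static String message(Locale locale) {
        ResourceBundle bundle = ResourceBundle.getBundle(
            "BundleDemo$LocalStrings", locale,
            ResourceBundle.Control.getNoFallbackControl(
                ResourceBundle.Control.FORMAT_CLASS));
        return bundle.getString("sclNotAllowed");
    }

    public static void main(String[] args) {
        System.out.println(message(Locale.KOREAN));  // Korean bundle is chosen
        System.out.println(message(Locale.FRENCH));  // no _fr bundle: English base
    }
}
```

With `-Duser.language=ko -Duser.country=KR`, the JVM default locale is Korean, so lookups like this resolve to the `_ko` resources throughout Tomcat.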

Basically, English messages cost me translation processing time in my brain, because my cognitive system developed around my mother tongue, Korean. It probably takes less time for some people, but it takes more for me: I have to spend more time to grasp the meaning than native or near-native English speakers do. And it does not happen just once or twice; it keeps occurring again and again, and the gaps accumulate into hours, days, or months. Message translations in software projects can help avoid or reduce those gaps.
    Of course, the translation should be correct. A long time ago, books published by the "ㅅ" publisher in South Korea were very difficult to understand even though they were translated into Korean. Perhaps the translators had good English skills but no software development experience, or never asked proficient engineers to review the translations before publication. I don't think this happened only in the IT field. Whether about economics or statistics, some books translated into Korean were harder to understand than the originals: some passages were out of context, used terminology never seen in real practice, coined new words from odd combinations of Chinese characters, or kept unnatural passive voice from overly literal translation. So some people tried to read the original English books instead, while others, including myself, had to rush in head first.
    One thing is clear to me: once messages are translated into the correct words, it saves a lot of the time that many people would otherwise spend on translation. The more popular the software, the more value a correct translation delivers.

2. Apache Tomcat Translation with Korean Examples


Since Apache Tomcat 9.0.15, almost every English message has been translated into Korean. If you set the default locale of the JVM to Korean (`CATALINA_OPTS="-Duser.country=KR -Duser.language=ko"`) as in the following example, you can see all the internal information, warning and error messages in Korean. I ran Apache Tomcat simply with `bin/catalina.sh run` below.

$ export CATALINA_OPTS="-Duser.country=KR -Duser.language=ko"

$ bin/catalina.sh run

Using CATALINA_BASE:   /Users/tester/tomcat
Using CATALINA_HOME:   /Users/tester/tomcat
Using CATALINA_TMPDIR: /Users/tester/tomcat/temp
...
24-Apr-2019 23:51:08.477 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log 서버 버전 이름:        Apache Tomcat/9.0.18-dev
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log Server 빌드 시각:          Apr 20 2019 19:48:52 UTC
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log Server 버전 번호:         9.0.18.0
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log 운영체제 이름:               Mac OS X
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log 운영체제 버전:            10.14.4
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log 아키텍처:          x86_64
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log 자바 홈:             /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log JVM 버전:           1.8.0_144-b01
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log JVM 벤더:            Oracle Corporation
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /Users/tester/tomcat
24-Apr-2019 23:51:08.481 정보 [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /Users/tester/tomcat
...
24-Apr-2019 23:51:08.488 정보 [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent 프로덕션 환경들에서 최적의 성능을 제공하는, APR 기반 Apache Tomcat Native 라이브러리가, 다음 java.library.path에서 발견되지 않습니다: [/Users/tester/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
24-Apr-2019 23:51:08.749 정보 [main] org.apache.coyote.AbstractProtocol.init 프로토콜 핸들러 ["http-nio-8080"]을(를) 초기화합니다.
24-Apr-2019 23:51:08.775 정보 [main] org.apache.coyote.AbstractProtocol.init 프로토콜 핸들러 ["ajp-nio-8009"]을(를) 초기화합니다.
24-Apr-2019 23:51:08.778 정보 [main] org.apache.catalina.startup.Catalina.load [526] 밀리초 내에 서버가 초기화되었습니다.
24-Apr-2019 23:51:08.806 정보 [main] org.apache.catalina.core.StandardService.startInternal 서비스 [Catalina]을(를) 시작합니다.
24-Apr-2019 23:51:08.807 정보 [main] org.apache.catalina.core.StandardEngine.startInternal 서버 엔진을 시작합니다: [Apache Tomcat/9.0.18-dev]
24-Apr-2019 23:51:08.814 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/docs]을(를) 배치합니다.
24-Apr-2019 23:51:09.052 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/docs]에 대한 배치가 [237] 밀리초에 완료되었습니다.
24-Apr-2019 23:51:09.055 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/manager]을(를) 배치합니다.
24-Apr-2019 23:51:09.117 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/manager]에 대한 배치가 [62] 밀리초에 완료되었습니다.
24-Apr-2019 23:51:09.117 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/examples]을(를) 배치합니다.
24-Apr-2019 23:51:09.911 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/examples]에 대한 배치가 [793] 밀리초에 완료되었습니다.
24-Apr-2019 23:51:09.911 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/ROOT]을(를) 배치합니다.
24-Apr-2019 23:51:09.969 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/ROOT]에 대한 배치가 [57] 밀리초에 완료되었습니다.
24-Apr-2019 23:51:09.969 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/host-manager]을(를) 배치합니다.
24-Apr-2019 23:51:10.019 정보 [main] org.apache.catalina.startup.HostConfig.deployDirectory 웹 애플리케이션 디렉토리 [/Users/tester/tomcat/webapps/host-manager]에 대한 배치가 [50] 밀리초에 완료되었습니다.
24-Apr-2019 23:51:10.022 정보 [main] org.apache.coyote.AbstractProtocol.start 프로토콜 핸들러 ["http-nio-8080"]을(를) 시작합니다.
24-Apr-2019 23:51:10.029 정보 [main] org.apache.coyote.AbstractProtocol.start 프로토콜 핸들러 ["ajp-nio-8009"]을(를) 시작합니다.
24-Apr-2019 23:51:10.031 정보 [main] org.apache.catalina.startup.Catalina.start 서버가 [1,252] 밀리초 내에 시작되었습니다.
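The bracketed values in the log lines above (e.g. `[1,252]` milliseconds) are filled in by `java.text.MessageFormat`, which also applies locale-aware digit grouping. A small sketch, using a pattern that mimics (but is not copied from) an entry in `LocalStrings_ko.properties`:

```java
import java.text.MessageFormat;
import java.util.Locale;

public class LogMessageDemo {
    // Format a startup message the way Tomcat's i18n layer does:
    // the {0} placeholder receives the elapsed milliseconds.
    public static String startupMessage(long millis) {
        MessageFormat fmt = new MessageFormat(
            "서버가 [{0}] 밀리초 내에 시작되었습니다.", Locale.KOREAN);
        return fmt.format(new Object[] { millis });
    }

    public static void main(String[] args) {
        // Locale-aware number formatting inserts the grouping comma: [1,252]
        System.out.println(startupMessage(1252L));
    }
}
```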

Almost every message is now served in Korean: "... 밀리초 내에 서버가 초기화되었습니다" (meaning "the server was initialized in ... ms"), "웹 애플리케이션 디렉토리" (meaning "Web Application Directory"), "배치가 ... 완료되었습니다" (meaning "Deployment ... completed"), etc.
    The screenshot below was taken on the HelloWorldExample servlet page in the default examples web application. You can visit http://localhost:8080/, click the "Examples" menu at the top, click the "Servlet Examples" link, and finally click the "Hello World" example link.

The HelloWorld servlet example (/examples/servlets/servlet/HelloWorldExample)

The Request Info servlet example ("RequestInfoExample") is served in Korean, too:

The RequestInfo servlet example (/examples/servlets/servlet/RequestInfoExample)

When stopping Apache Tomcat by pressing Control-C in the command line console, the messages about the shutdown process are served in Korean, too:

^C
25-Apr-2019 00:08:47.580 정보 [Thread-5] org.apache.coyote.AbstractProtocol.pause 프로토콜 핸들러 ["http-nio-8080"]을(를) 일시 정지 중
25-Apr-2019 00:08:47.589 정보 [Thread-5] org.apache.coyote.AbstractProtocol.pause 프로토콜 핸들러 ["ajp-nio-8009"]을(를) 일시 정지 중
25-Apr-2019 00:08:47.595 정보 [Thread-5] org.apache.catalina.core.StandardService.stopInternal 서비스 [Catalina]을(를) 중지시킵니다.
25-Apr-2019 00:08:47.615 정보 [Thread-5] org.apache.coyote.AbstractProtocol.stop 프로토콜 핸들러 ["http-nio-8080"]을(를) 중지시킵니다.
25-Apr-2019 00:08:47.618 정보 [Thread-5] org.apache.coyote.AbstractProtocol.stop 프로토콜 핸들러 ["ajp-nio-8009"]을(를) 중지시킵니다.
25-Apr-2019 00:08:47.619 정보 [Thread-5] org.apache.coyote.AbstractProtocol.destroy 프로토콜 핸들러 ["http-nio-8080"]을(를) 소멸시킵니다.
25-Apr-2019 00:08:47.620 정보 [Thread-5] org.apache.coyote.AbstractProtocol.destroy 프로토콜 핸들러 ["ajp-nio-8009"]을(를) 소멸시킵니다.
$

3. How Was It Started?


As you may know, The Apache Software Foundation (https://apache.org) has helped and nurtured great open source software projects and communities based on voluntary contributions. People get involved in a community through the mailing lists of the project they find interesting. They ask questions or try to give answers to help other people; those interested in testing, development, or documentation also discuss in the mailing lists how to improve the software and the process, and report bugs through the bug tracking systems. The community invites people as committers once they have made a substantial amount of contributions in various forms such as bug reporting, helping others through mailing lists, providing patches, helping with documentation, and so on. The committers make changes in the source. Furthermore, committers who share the vision of the community may become members of the Project Management Committee (PMC) and participate in the decision making process for the project on behalf of the community. This governance model is known as The Apache Way. See https://www.apache.org/theapacheway/index.html for more detail.
    The Apache Tomcat translation initiative started within this community culture, based on voluntary contributions from individuals. On Nov. 12, 2018, Mark Thomas, a long-time Apache Tomcat committer and PMC member who has also contributed a lot to the Apache Software Foundation, posted the following message to the users mailing list (ref: https://lists.apache.org/thread.html/d53034694855fcc346e660fb688ddb7886574e0168d6eca70e4ece37@%3Cusers.tomcat.apache.org%3E). Long story short: to solve the fundamental problem that it is very hard to find which resource files to patch unless you are an expert in the Apache Tomcat project, the Apache Tomcat PMC set up a POEditor project (see the screenshot below) to encourage more people to participate in the collective translation work, hoping to ship the contributed resources in Apache Tomcat 9 releases.

From: Mark Thomas
Subject: Translation help wanted
Date: 2018/11/12 11:49:51
List: users@tomcat.apache.org

All,

Apache Tomcat includes some translations for error messages and parts of
the user interface - primarily the Manager web application. We would
like to improve the coverage and quality of these translations.
Accordingly, the Tomcat project has been set up on POEditor, a web-based
service for managing the translation of resource files.

The aim is that anyone who wants to contribute to the translations (it
could be anything from fixing a typo in an existing translation to
adding support for a new language) can create an account and contribute.

If you would like to contribute in this way then the
Tomcat project can be found here:

https://poeditor.com/join/project/NUTIjDWzrl

Anyone should be able to join up as a contributor. If you are
interested, please sign up and start contributing.

Note: All contributions will be taken as being made under the terms of
the Apache License version 2.

I'm aiming to export the translations on a regular basis to the Tomcat
source code. How regularly will depend on the rate of new/updated
translations but as a minimum, I'm aiming to get any updates into the
next Tomcat 9 release.

If you have any difficulties or questions, please ask here.

Thanks,

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org

If you have a look at the message thread, an unbelievable number of people reacted with positive, voluntary willingness. And it didn't take many days to prove how great an outcome the community could achieve. Here is Mark Thomas's message again, just 9 days later (ref: https://lists.apache.org/thread.html/3dfab1b732e4223bd846617086b788ee41d35228b684f4714404f2a3@%3Cusers.tomcat.apache.org%3E)

From: Mark Thomas
To: Tomcat Users List
Subject: Translations update
Date: 2018/11/21 09:58:15
List: users@tomcat.apache.org

Hi all,

I wanted to let you know about the amazing progress that is being made
on the Tomcat translations at
https://poeditor.com/join/project/NUTIjDWzrl

In the short time since this effort has started the community has
achieved the following:

- French has increased from 18% to 64% coverage
- Simplified Chinese has been added and has already reached 32% coverage
- Korean has been added and has reached 10% coverage
- German has increased from 2% to 7% coverage
- Brazilian Portuguese has been added and has reached 4% coverage
- Spanish has increased from 42% to 44% coverage

as well as a smaller number of additions and corrections to another 6
languages.

A big thank you to everyone who has contributed.

There is still lots to do so if you would like to help out please join
us at:
https://poeditor.com/join/project/NUTIjDWzrl

Thanks,

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org

In less than 10 days, the portion of translated messages increased from 18% to 64% for French, from 42% to 44% for Spanish, and from 2% to 7% for German. Even better, new languages were added that had never been in the project before: 32% for Simplified Chinese, 10% for Korean, and 4% for Brazilian Portuguese.
    As of today, April 28, 2019, as I write this article, more than 99% of the messages have been translated into Korean, and more than 140 volunteers have made over 3,044 contributions in 17 different languages! And the collective work continues. See the POEditor project homepage for details: https://poeditor.com/join/project/NUTIjDWzrl.





4. And It Continues


Have you ever come across weird translations in software messages during your IT career? Even when English messages were translated into your language, haven't some of them felt awkward?
    Sharing those concerns, the Apache Tomcat community suggests that we fix this problem together. The suggestion is not an abstract principle but a very concrete, practical solution: the POEditor project (https://poeditor.com/join/project/NUTIjDWzrl). It is not difficult; editing is really easy. If you don't understand why a message is used in a particular context, you can ask questions through comments in the POEditor project and share ideas. Committers may answer your questions, and you can discuss with other translators, too.

If you want to join in the common experience of helping each other in the community, feel free to join the Apache Tomcat POEditor project (https://poeditor.com/join/project/NUTIjDWzrl). Choose the language you want to translate into.
    Also, to join the users' or developers' mailing lists to ask questions or discuss anything, see https://tomcat.apache.org/lists.html.


In today's world, where everything is digitally connected, people far apart geographically, across time zones and languages, contribute to the open source projects they find interesting. It is like communities that, for thousands of years, collaborated to build shared reservoirs and plant trees on the dams to protect them as commons. Those who have already gained experience now try to figure out how to make it easier for others to participate. Easy tools like POEditor help people get involved more easily and effectively. They know things become easier together, and that together the community achieves more.

Wednesday, November 14, 2018

Apache Jackrabbit Database Usage Patterns and Options to Reduce Database Size

Recently, I wrote about how to externalize version storage to an SFTP server backend to reduce database size: https://woonsanko.blogspot.com/2018/11/externalizing-jcr-version-storage-with.html. It is similar to keeping the binary content in either an AWS S3 bucket or a virtual file system such as an SFTP or WebDAV server, as I described before in https://woonsanko.blogspot.com/2016/08/cant-we-store-huge-amount-of-binary.html. The only high-level difference is that the former is about the version history table, VERSION_BUNDLE, whereas the latter is about the binary table, DATASTORE.

I'd like to explain how those tables can make a significant impact on database size by showing database usage patterns from several real CMS systems. At the end, I'd also like to list the benefits of reducing the database size.

Pattern 1: Huge DATASTORE table for a Simple Website



The chart shows that more than 95% of the database is consumed by the DATASTORE table, which stores only binary content such as images and PDF files, not document or configuration nodes and properties. The project implements a CMS-based website serving a huge amount of binaries, but business users probably do not edit and publish documents often. It is also possible that they migrated binary data such as images and PDF files from external sources into the CMS in order to serve it through the website easily.

If they switch the Apache Jackrabbit DataStore component from the default DbDataStore to either S3DataStore or VFSDataStore, they can save more than 95% of the database.

Pattern 2: Big DATASTORE table with Modest Document/Node Updates



This site shows a modest amount of document and node content in the DEFAULT_BUNDLE table, which contains the node bundle data of the default Jackrabbit workspace. That means business users update and publish a modest amount of content. But still more than 90% of the database is consumed by binary content alone in the DATASTORE table.

The same story goes: if they switch the Apache Jackrabbit DataStore component from the default DbDataStore to either S3DataStore or VFSDataStore, they can save more than 90% of the database.

Pattern 3: More Document Oriented CMS



In this site, the DEFAULT_BUNDLE table is relatively bigger than in the other sites, taking more than 50% of the database. That means content document updates and publication are very important to the business users of this CMS. They probably need to update and (re)publish content more frequently for their websites.

Because the default workspace data needs to be queried and accessed frequently by the delivery web applications, there is nothing more to do with the DEFAULT_BUNDLE table.
However, this site still consumes more than 20% of the database for binary content alone in the DATASTORE table, and up to 20% for version history in the VERSION_BUNDLE table.
Therefore, if they switch both the DataStore component and the FileSystem component of the VersionManager to the alternatives -- S3DataStore / VFSDataStore and VFSFileSystem -- they can save more than 40% of the database.

Pattern 4: More Versioning or Periodic Content Ingestion to CMS



In this site, more than 55% of the database is consumed by version history in the VERSION_BUNDLE table, and up to 30% by binary content in the DATASTORE table.
There are two possibilities: (a) business users update and publish documents very often, resulting in a lot of version history data, or (b) a batch job runs periodically to import external content into the CMS, publishing the updated documents after each import.
In either case, if they switch both the DataStore component and the FileSystem component of the VersionManager to the alternatives -- S3DataStore / VFSDataStore and VFSFileSystem -- they can save more than 85% of the database.

Benefits of Reducing Database Size


What are the benefits of reducing the repository database size, by the way?
Here's my list:
  1. Transparent JCR API
    • As you're switching only Apache Jackrabbit internal components, it doesn't affect applications. You don't need to write or use a plugin to manage binary content in a different storage by yourself. The existing JCR API still works transparently.
    • Indexing still works transparently. If you upload a PDF file, it will be indexed and searchable. However, if you implement a custom solution, you need to take care of it by yourself.
  2. Almost unlimited storage for binaries
    • If you use either S3 bucket or SFTP gateway for Google Cloud Platform or even SFTP server directly, then you can store practically almost unlimited amount of binaries and version history in modern cloud computing world.
  3. Cheaper storage
    • Amazon S3 or an SFTP server is a lot cheaper than the database option. For example, Amazon RDS is more expensive than S3 storage for binary content.
  4. Faster backup, import, migration
    • Apache Jackrabbit DataStore component allows you to do hot-backup and restoration from the backup files to the backend system at runtime.
  5. Build new environments quickly from production data
    • Because the database is small enough in most cases, you can build a new environment from another environment's backups more quickly.
  6. Save backup storage
    • If you do nightly backups, weekly backups, etc., and you have to keep those backup files for some period (e.g., 1 year), you might sometimes worry about backup disk storage. If the database size is small enough, those concerns are relieved, and you can take advantage of S3 backup capabilities.
  7. Encryption at rest
    • If you have sensitive PDF files, for example, you might want to take advantage of the encryption at rest provided by Amazon S3 or the Linux file system.


Tuesday, August 30, 2016

Can't we store huge amount of binary data in JCR?

Can't we store a huge amount of binary data in JCR? If, as a software architect, you have ever faced a question like this (e.g., a requirement to store a huge amount of binary data such as PDF files in JCR), you have probably paused to sketch some candidate solutions. What is technically feasible and what is not? What best fulfills all the different quality attributes (such as scalability, performance, and security) with acceptable trade-offs? Furthermore, what is cost-effective and what is not?

Surprisingly, many people have tried to avoid JCR storage for binary data when the amount is going to be really huge. Instead of using JCR, they have often implemented a custom (UI) module to store binary data directly in a different storage such as SFTP, S3, or WebDAV through backend-specific APIs.



It somewhat makes sense to separate the binary data store if the amount is going to be really huge. Otherwise, the size of the database used by JCR can grow too much, which makes it harder and more costly to maintain, back up, restore, and deploy over time. Also, if your application needs to serve the binary data in a very scalable way, keeping everything in a single database is more difficult than separating the binary data store.

But this custom (UI) module approach has a big disadvantage. If you store a PDF file through a custom (UI) module, you can no longer search its content through the standard JCR Query API, because JCR (Jackrabbit) is never involved in storing/indexing/retrieving the binary data. If you used the JCR API to store the data, Apache Jackrabbit would index your binary node automatically and you would be able to search the content very easily. Being unable to search PDF documents through the standard JCR API can be a big disappointment.

Let's face the initial question again: Can't we store a huge amount of binary data in JCR?
Actually... yes, we can. We can store a huge amount of binary data through JCR in a standard way if we choose the right Apache Jackrabbit DataStore for a different backend such as SFTP, WebDAV, or S3. Apache Jackrabbit was designed so that a different DataStore can be plugged in, and it provides DataStore components for various backends. As of Apache Jackrabbit 2.13.2 (released on August 29, 2016), it even supports an Apache Commons VFS based DataStore component, which enables using SFTP and WebDAV as backend storage. That's what I'm going to talk about here.

DataStore Component in Apache Jackrabbit

Before jumping into the details, let me explain what the DataStore was designed for in Apache Jackrabbit. Basically, the Apache Jackrabbit DataStore was designed to support large binary stores for performance, reducing disk usage. Normally all node and property data is stored through the PersistenceManager, but relatively large binaries such as PDF files are stored separately through the DataStore component.



DataStore enables:
  • Fast copy (only the identifier is stored by PersistenceManager, in database for example),
  • No blocking in storing and reading,
  • Immutable objects in DataStore,
  • Hot backup support, and
  • All cluster nodes using the same DataStore.
Please see https://wiki.apache.org/jackrabbit/DataStore for more detail. In particular, note that a binary data entry in the DataStore is immutable, so it cannot be changed after creation. This makes it a lot easier to support caching, hot backup/restore, and clustering. Binary data items that are no longer used are deleted automatically by the Jackrabbit garbage collector.
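The immutability noted above is what makes record identifiers stable: a DataStore record is addressed by a digest of its content (Jackrabbit's FileDataStore, for instance, uses the SHA-1 of the binary), so identical bytes always map to the identical record. A minimal sketch of that identifier scheme (not Jackrabbit's actual code):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DataIdentifierDemo {
    // Compute a content-based identifier: the hex-encoded SHA-1 digest.
    public static String identifier(byte[] content) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-1");
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest(content)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] pdf = "same bytes".getBytes(StandardCharsets.UTF_8);
        // Identical content always yields the identical identifier,
        // which is what makes fast copy and safe caching possible.
        System.out.println(identifier(pdf).equals(identifier(pdf.clone()))); // true
    }
}
```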

Apache Jackrabbit has several DataStore implementations as shown below:


FileDataStore uses the local file system, DbDataStore uses a relational database, and S3DataStore uses Amazon S3 as the backend. Very interestingly, VFSDataStore uses a virtual file system provided by the Apache Commons VFS module.

FileDataStore cannot be used if you don't have a stable shared file system between cluster nodes. DbDataStore has been used by Hippo Repository by default because it works well in a clustered environment unless the binary data grows extremely large. S3DataStore and VFSDataStore look more interesting because they store binary data in external storage. In the following diagrams, binary data is handled by Jackrabbit through standard JCR APIs, so Jackrabbit has a chance to index even binary data such as PDF files. Jackrabbit invokes S3DataStore or VFSDataStore to store or retrieve binary data, and the DataStore component invokes its internal Backend component (S3Backend or VFSBackend) to write/read to/from the backend storage.


One important thing to note is that both S3DataStore and VFSDataStore extend Apache Jackrabbit's CachingDataStore. This gives a big performance benefit because a CachingDataStore caches binary data entries in the local file system to avoid communicating with the backend unnecessarily.


As shown in the preceding diagram, when Jackrabbit needs to retrieve a binary data entry, it invokes the DataStore (a CachingDataStore such as S3DataStore or VFSDataStore, in this case) with an identifier. The CachingDataStore first checks whether the binary data entry already exists in its LocalCache. [R1] If it is not found there, it invokes its Backend (such as S3Backend or VFSBackend) to read the data from the backend storage such as S3, SFTP, or WebDAV. [B1] When reading the data entry, it stores the entry in the LocalCache as well and serves the data back to Jackrabbit. The CachingDataStore keeps the LRU cache, LocalCache, up to 64GB by default in a local folder that can be changed in the configuration. Therefore, it should be very performant when a binary data entry is requested multiple times, because the entry is most likely to be served from the local file cache. Serving binary data from a locally cached file is probably much faster than serving it through DbDataStore, since DbDataStore does not extend CachingDataStore nor have a local file cache concept at all (yet).
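The read path just described, try the LocalCache first ([R1]), fall back to the Backend on a miss ([B1]) and remember the result, can be sketched as a small size-bounded LRU cache. The names here (`LocalCacheSketch`, `Backend`) are illustrative, not Jackrabbit API, and unlike the real CachingDataStore this toy keeps entries in memory rather than in local files:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LocalCacheSketch {
    public interface Backend { byte[] read(String identifier); }

    private final long maxBytes;
    private long currentBytes;
    private final Backend backend;
    // Access-ordered LinkedHashMap gives LRU eviction for free.
    private final LinkedHashMap<String, byte[]> cache =
        new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                if (currentBytes > maxBytes) {          // over budget: drop LRU entry
                    currentBytes -= eldest.getValue().length;
                    return true;
                }
                return false;
            }
        };

    public LocalCacheSketch(long maxBytes, Backend backend) {
        this.maxBytes = maxBytes;
        this.backend = backend;
    }

    public byte[] getRecord(String identifier) {
        byte[] data = cache.get(identifier);            // [R1] try the local cache
        if (data == null) {
            data = backend.read(identifier);            // [B1] miss: read from backend
            currentBytes += data.length;
            cache.put(identifier, data);                // keep it for the next read
        }
        return data;
    }
}
```

A repeated `getRecord` for the same identifier never touches the backend until the entry is evicted, which is the effect that makes CachingDataStore faster than DbDataStore for hot binaries.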

Using VFSDataStore in a Hippo CMS Project

To use VFSDataStore, add the following properties in the root pom.xml:

  <properties>

    <!--***START temporary override of versions*** -->
    <!-- ***END temporary override of versions*** -->
    <com.jcraft.jsch.version>0.1.53</com.jcraft.jsch.version>

    <!-- SNIP -->

  </properties>

Apache Jackrabbit has supported VFSDataStore since 2.13.2. You also need to add the following dependencies in cms/pom.xml:

    <!-- Adding jackrabbit-vfs-ext -->
    <dependency>
      <groupId>org.apache.jackrabbit</groupId>
      <artifactId>jackrabbit-vfs-ext</artifactId>
      <version>${jackrabbit.version}</version>
      <scope>runtime</scope>
      <!--
        Exclude jackrabbit-api and jackrabbit-jcr-commons since those were pulled
        in by Hippo Repository modules.
      -->
      <exclusions>
        <exclusion>
          <groupId>org.apache.jackrabbit</groupId>
          <artifactId>jackrabbit-api</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.jackrabbit</groupId>
          <artifactId>jackrabbit-jcr-commons</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

    <!-- Required to use SFTP VFS2 File System -->
    <dependency>
      <groupId>com.jcraft</groupId>
      <artifactId>jsch</artifactId>
      <version>${com.jcraft.jsch.version}</version>
    </dependency>

And, we need to configure VFSDataStore in conf/repository.xml like the following example:

<Repository>

  <!-- SNIP -->

  <DataStore class="org.apache.jackrabbit.vfs.ext.ds.VFSDataStore">
    <param name="config" value="${catalina.base}/conf/vfs2.properties" />
    <!-- VFSDataStore specific parameters -->
    <param name="asyncWritePoolSize" value="10" />
    <!--
      CachingDataStore specific parameters:
        - secret : key to generate a secure reference to a binary.
    -->
    <param name="secret" value="123456789"/>
    <!--
      Other important CachingDataStore parameters with default values, just for information:
        - path : local cache directory path. ${rep.home}/repository/datastore by default.
        - cacheSize : The number of bytes in the cache. 64GB by default.
        - minRecordLength : The minimum size of an object that should be stored in this data store. 16KB by default.
        - recLengthCacheSize : In-memory cache size to hold DataRecord#getLength() against DataIdentifier. One item for 140 bytes approximately.
    -->
    <param name="minRecordLength" value="1024"/>
    <param name="recLengthCacheSize" value="10000" />
  </DataStore>

  <!-- SNIP -->

</Repository>

The VFS connectivity is configured in ${catalina.base}/conf/vfs2.properties like the following for instance:

baseFolderUri = sftp://tester:secret@localhost/vfsds

So, in this specific example, the VFSDataStore stores and reads binary data through the SFTP backend storage configured in the properties file.
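For illustration only (this snippet is not part of Jackrabbit), the components packed into a baseFolderUri such as the SFTP example above can be picked apart with a standard URI parser:

```python
from urllib.parse import urlsplit

# A VFS2-style base folder URI packs the backend scheme, the credentials,
# the host, and the base folder into a single string.
parts = urlsplit("sftp://tester:secret@localhost/vfsds")

print(parts.scheme)    # backend type: "sftp"
print(parts.username)  # "tester"
print(parts.password)  # "secret"
print(parts.hostname)  # "localhost"
print(parts.path)      # base folder on the backend: "/vfsds"
```

Keep in mind that embedding the password in the URI means it ends up in a plain-text properties file, so the file permissions should be restricted accordingly.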

For more detailed information, examples, and other backend usages such as WebDAV through VFSDataBackend, please visit my demo project here:

Note: Hippo CMS 10.x and 11.0 pull in Apache Jackrabbit 2.10.x modules at the moment. However, there have been no significant or incompatible changes in org.apache.jackrabbit:jackrabbit-data and org.apache.jackrabbit:jackrabbit-vfs-ext between Apache Jackrabbit 2.10.x and 2.13.x. Therefore, it seems fine to pull in the org.apache.jackrabbit:jackrabbit-vfs-ext:jar:2.13.x dependency in cms/pom.xml as shown above for now. Ideally, though, all the Apache Jackrabbit module versions should be aligned some day soon.
Update: Note that Hippo CMS 12.x pulls in Apache Jackrabbit 2.14.0+. Therefore, you can simply use ${jackrabbit.version} for the dependencies mentioned in this article.

Configuration for S3DataStore

In case you want to use S3DataStore instead, you need the following dependency:

    <!-- Adding jackrabbit-aws-ext -->
    <dependency>
      <groupId>org.apache.jackrabbit</groupId>
      <artifactId>jackrabbit-aws-ext</artifactId>
      <!-- ${jackrabbit.version} or a specific version like 2.14.0-h2. -->
      <version>${jackrabbit.version}</version>
      <scope>runtime</scope>
      <!--
        Exclude jackrabbit-api and jackrabbit-jcr-commons since those were pulled
        in by Hippo Repository modules.
      -->
      <exclusions>
        <exclusion>
          <groupId>org.apache.jackrabbit</groupId>
          <artifactId>jackrabbit-api</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.apache.jackrabbit</groupId>
          <artifactId>jackrabbit-jcr-commons</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

    <!-- Consider using the latest AWS Java SDK for latest bug fixes. -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
      <version>1.11.95</version>
    </dependency>

And, we need to configure S3DataStore in conf/repository.xml like the following example (excerpt from https://github.com/apache/jackrabbit/blob/trunk/jackrabbit-aws-ext/src/test/resources/repository_sample.xml):

<Repository>

  <!-- SNIP -->

  <DataStore class="org.apache.jackrabbit.aws.ext.ds.S3DataStore">
    <param name="config" value="${catalina.base}/conf/aws.properties"/>
    <param name="secret" value="123456789"/>
    <param name="minRecordLength" value="16384"/>
    <param name="cacheSize" value="68719476736"/>
    <param name="cachePurgeTrigFactor" value="0.95d"/>
    <param name="cachePurgeResizeFactor" value="0.85d"/>
    <param name="continueOnAsyncUploadFailure" value="false"/>
    <param name="concurrentUploadsThreads" value="10"/>
    <param name="asyncUploadLimit" value="100"/>
    <param name="uploadRetries" value="3"/>
  </DataStore>

  <!-- SNIP -->

</Repository>

The AWS S3 connectivity is configured in ${catalina.base}/conf/aws.properties in the above example.

Please find an example aws.properties below and adjust the configuration for your environment:
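The following is an illustrative sketch, not a production configuration: every value is a placeholder, and the property names follow the sample configuration shipped with jackrabbit-aws-ext, so double-check them against your Jackrabbit version.

```properties
# Hypothetical aws.properties example; replace every value for your environment.
accessKey=YOUR_AWS_ACCESS_KEY
secretKey=YOUR_AWS_SECRET_KEY
s3Bucket=your-datastore-bucket
s3Region=us-east-1
# Optional connection tuning.
connectionTimeout=120000
socketTimeout=120000
maxConnections=20
writeThreads=10
maxErrorRetry=10
```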

Comparisons with Different DataStores

DbDataStore (the default DataStore used by most Hippo CMS projects) provides a simple clustering capability based on a centralized database, but it can grow the database size significantly; as a result, it can increase maintenance/deployment costs and make hot backup/restore relatively harder once the amount of binary data becomes really huge. Also, because DbDataStore doesn't maintain a local file cache for the "immutable" binary data entries, it is relatively less performant when serving binary data retrieved from JCR. You could argue, though, that the application is responsible for all the cache controls so as not to burden JCR.

S3DataStore uses Amazon S3 as backend storage, and VFSDataStore uses a virtual file system provided by the Apache Commons VFS module. Both obviously help reduce the database size, so system administrators can save time and cost on maintenance and new deployments with these DataStores. Both are internal plugged-in components, as designed by Apache Jackrabbit, so clients can simply use the standard JCR API to write and read binary data. More importantly, Jackrabbit is able to index binary data such as PDF files into its internal Lucene index, so clients can run standard JCR queries to retrieve data without having to implement custom code against specific backend APIs.

One of the notable differences between S3DataStore and VFSDataStore is that the former requires cloud-based storage (Amazon S3), which might not be allowed in some highly secured environments, whereas the latter allows various cost-effective backend storages, including SFTP and WebDAV, which can be deployed wherever you want. With S3DataStore, though, you can take full advantage of flexible cloud-based storage.

Summary

Apache Jackrabbit's VFSDataStore can be a very feasible, cost-effective and secure option for projects that need to host a huge amount of binary data in JCR. VFSDataStore lets you use SFTP, WebDAV, etc. as backend storage at a moderate cost, deployed wherever you want. Also, it lets you use the standard JCR API to read and write binary data, which should save more development effort and time than implementing a custom (UI) plugin that communicates directly with a specific backend storage.

Other Materials

I once presented this topic to my colleagues, and I'd like to share that presentation with you as well.

Please leave a comment if you have any questions or remarks.

Tuesday, July 10, 2012

Converting Apache/Tomcat Access Logs to CSV

Recently, I had to analyze Apache/Tomcat access log files, so I needed to convert them into CSV in order to use other tools such as spreadsheets.
The conversion shouldn't be hard. I found some scripts (in PHP, AWK, Perl or Ruby) on the internet, but they didn't quite fit my needs. I didn't want to lose any data, such as the HTTP method or the byte size sent in the response. Also, the CSV should contain a spreadsheet-friendly data format: for example, "2012-07-10 22:30:03" instead of "10/Jul/2012:22:30:03".
So, I ended up writing yet another one by myself. Why not? ;-)
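The core of such a conversion can be sketched like this (a minimal illustration in Python, not the author's actual Perl script):

```python
import re
from datetime import datetime

# Apache Common Log Format:
# host ident authuser [10/Jul/2012:22:30:03 +0900] "GET /path HTTP/1.1" 200 2326
LOG_PATTERN = re.compile(
    r'^(\S+) (\S+) (\S+) \[([^\]]+)\] "(\S+) (\S+) (\S+)" (\d{3}) (\S+)'
)

def log_line_to_row(line):
    """Parse one Common Log Format line into a CSV-friendly list,
    or return None for an invalid line."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None
    host, _ident, user, ts, method, path, proto, status, size = m.groups()
    # "10/Jul/2012:22:30:03 +0900" -> "2012-07-10 22:30:03"
    when = datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z")
    return [
        host,
        user,
        when.strftime("%Y-%m-%d %H:%M:%S"),
        method,
        path,
        proto,
        int(status),
        0 if size == "-" else int(size),
    ]
```

Feeding each STDIN line through log_line_to_row, writing valid rows with csv.writer to STDOUT and echoing invalid lines to STDERR, reproduces the overall behavior of the script described below.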
Here's the link to the source:

The script can be executed like the following:

$ perl accesslog2csv.pl access_log_file > csv_output_file.csv

Or, you can redirect STDIN like the following examples:

$ perl accesslog2csv.pl < access_log_file > csv_output_file.csv

$ cat access_log_file | perl accesslog2csv.pl > csv_output_file.csv

Also, you can check invalid log lines by redirecting STDERR:

$ perl accesslog2csv.pl < access_log_file > csv_output_file.csv 2> invalid_log_lines.txt


Hope it helps somewhere! :-)

Generating Reports from Web Logs with AWStats

When you want to analyze the web access pattern from the web access logs, AWStats (http://awstats.sourceforge.net) is a handy solution. In my case, I needed to collect summary data from Tomcat access log files and build proper sample data for load testing.
Here's how to generate reports with AWStats from an access log file:

1. Prerequisites


2. Install AWStats

If you extract the compressed AWStats distribution file, you can find the `awstats_configure.pl' script under the `tools' directory. You can start with the script as in the following example.
 
$ perl ./awstats_configure.pl

<SNIP>

Do you want to continue setup from this NON standard directory [yN] ? y

<SNIP>

-----> Need to create a new config file ?
Do you want me to build a new AWStats config/profile file (required if first install) [y/N] ? y

-----> Define config file name to create
What is the name of your web site or profile analysis ?
Example: www.mysite.com
Example: demo
Your web site, virtual server or profile name:
> demo


<SNIP>

Press ENTER to continue... 

<SNIP>


Press ENTER to finish...


In the above example, I installed AWStats simply to generate reports offline from access log files, without installing it onto Apache Web Server.
At the second prompt, I typed 'demo' for a demo analysis task.
The execution above generates the configuration file for the demo at `../wwwroot/cgi-bin/awstats.demo.conf'.

3. Setting the configuration file

Let's open and edit the configuration file for the 'demo' analysis task.
Let's assume you're going to analyze a Tomcat access log file, which is in Apache Common Log Format.
Here are what you need to edit at least in the configuration file (e.g., `../wwwroot/cgi-bin/awstats.demo.conf'):

# <SNIP>

# Set the access log file path here
LogFile="/var/log/tomcat/access.log"

# <SNIP>

# Examples for Apache combined logs (following two examples are equivalent):
# LogFormat = 1
# <SNIP/>
# For Apache Common Log Format (e.g., Tomcat access log), set it to 4.
LogFormat=4

# <SNIP>

# Set the data directory where AWStats internal data files are stored.
DirData="/var/log/data"

# <SNIP>


With the above configuration (named 'demo' as shown earlier), this analysis task will analyze the log file configured by the 'LogFile' directive, and the internal data will be stored in the directory configured by the 'DirData' directive.

4. Update Log Data

Now, you can run AWStats. Go to the `../wwwroot/cgi-bin/' directory and run the following command to update the data from the configured log file:

$ cd ../wwwroot/cgi-bin/
$ perl awstats.pl -config=demo -update

Create/Update database for config "./awstats.demo.conf" by AWStats version 7.0 (build 1.971)
From data in log file "/var/log/tomcat/access.log"...
Phase 1 : First bypass old records, searching new record...
Searching new records from beginning of log file...
Phase 2 : Now process new records (Flush history on disk after 20000 hosts)...
Jumped lines in file: 0
Parsed lines in file: 44217
 Found 0 dropped records,
 Found 0 comments,
 Found 0 blank records,
 Found 1 corrupted records,
 Found 0 old records,
 Found 44216 new qualified records.
 

With the above command, AWStats reads all the data from the configured log file and updates its internal data files.
If you want to delete the data and re-update from the log files, simply delete all the `*.txt' files in the data directory (configured by the DirData directive above) and run `perl awstats.pl -config=demo -update` again.
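That purge step can be sketched as a small helper like the following (a hypothetical illustration, assuming DirData points at a local directory; a plain `rm' works just as well):

```python
import glob
import os

def purge_awstats_data(dir_data):
    """Delete AWStats internal data files (*.txt) under DirData so that the
    next 'awstats.pl -config=... -update' run rebuilds them from the log."""
    removed = []
    for path in glob.glob(os.path.join(dir_data, "*.txt")):
        os.remove(path)
        removed.append(os.path.basename(path))
    return sorted(removed)
```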

5. Generate Reports

Finally, you can generate a report from the updated data by the following command:

#
# First, copy the awstats_buildstaticpages.pl script from the tools directory
# if it does not exist here.
#
$ cp ../../tools/awstats_buildstaticpages.pl ./

$ perl awstats_buildstaticpages.pl -config=demo -month=all -year=2012 -dir=/tmp -awstatsprog=./awstats.pl -buildpdf=/usr/bin/htmldoc

or

$ perl awstats_buildstaticpages.pl -config=demo -month=all -year=2012 -dir=/tmp -awstatsprog=./awstats.pl

Main HTML page is 'awstats.demo.html'.
PDF file is 'awstats.demo.pdf'.

$



Now, the report has been generated as HTML files and, optionally, a PDF file (/tmp/awstats.demo.pdf) under the /tmp directory!

You can skip the `-buildpdf ...' option if you do not have HTMLDOC installed.

Open the PDF file or the main HTML page now. It contains nice reports!