Wednesday, October 19, 2011

Running IDEA on Windows 64bit JVM

For reasons unknown, the Windows download for IntelliJ IDEA defaults to running in a 32-bit JRE that ships with the distribution. This is particularly annoying on Windows, since a 32-bit JVM is limited to roughly a gigabyte and a half of heap space.

To run IDEA with an arbitrary JVM the "JDK_HOME" environment variable needs to be set (not "JAVA_HOME"). Then IDEA can be started with the "idea.bat" file in the "bin" directory of the installation.

By default the batch file will leave an extra console window open. This can be avoided by (a) changing the configured executable from java.exe to javaw.exe (just add the extra "w" where JAVA_EXE is configured) and (b) putting the "start" command in front of the actual call to Java.

Note that "start" treats the first quoted parameter as a window title, not a command. That means you need one extra quoted parameter in front with arbitrary content. The full line in my file (IDEA 10.5.2) looks like this:
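A sketch of what the rewritten line can look like -- the variable names follow the stock idea.bat of that era and should be treated as assumptions, not the exact original:

```bat
@rem "IDEA" is the dummy window title that start consumes as its first quoted argument
start "IDEA" "%JAVA_EXE%" %JVM_ARGS% -cp "%CLASS_PATH%" %IDEA_MAIN_CLASS_NAME% %*
```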


Sunday, November 22, 2009

Running the last launch configuration in Eclipse

Just in case I forget one day: if you want to get rid of that annoying guessing game Eclipse 3.5 plays when you hit F11, you have to change the "Launch Operation" option in Window->Preferences->Run/Debug->Launching to "Always launch the previously launched application". Afterwards F11 does what I always want it to do. Admittedly the Ctrl+Alt+X shortcuts are unwieldy, but they are good enough to launch something specific in the rare cases I want that.

Monday, July 27, 2009

Functional testing of WARs with Maven

I just finished setting up a configuration where a separate Maven project is used to run some JWebUnit tests on a web application. The focus is on regularly checking the application functionality, so we don't care about running on the production stack (yet); rather, we want an easy-to-run, quick test suite.

We assume that the application we want to run defaults to some standalone setup, e.g. by using something like an in-memory instance of hsqldb as its persistence solution. It also has to be available via Maven: in the simple case by running "mvn install" locally; the nicer solution is to run your own repository (e.g. using Artifactory).

Here is a sample POM for such a project. The rest of the story is in the inline documentation:

=== POM for testing a war file coming out of a separate build. ===

Basic idea:
- run Surefire plugin for tests, but in the integration-test phase
- use Cargo to start an embedded Servlet engine before the integration-tests, shut it down afterwards
- use tests written with JWebUnit with the HtmlUnit backend to do the actual testing work

This configuration can be run with "mvn integration-test".
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <!-- coordinates are placeholders, use your own -->
  <groupId>com.example</groupId>
  <artifactId>myapp-test-suite</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>MyApp Test Suite</name>
  <description>A set of functional tests for my web application</description>
  <build>
    <plugins>
      <!-- The Cargo plugin manages the Servlet engine -->
      <plugin>
        <groupId>org.codehaus.cargo</groupId>
        <artifactId>cargo-maven2-plugin</artifactId>
        <executions>
          <!-- start engine before tests -->
          <execution><id>start-container</id><phase>pre-integration-test</phase><goals><goal>start</goal></goals></execution>
          <!-- stop engine after tests -->
          <execution><id>stop-container</id><phase>post-integration-test</phase><goals><goal>stop</goal></goals></execution>
        </executions>
        <configuration>
          <!-- we use a Jetty 6 -->
          <container><containerId>jetty6x</containerId><type>embedded</type></container>
          <!-- don't let Jetty ask for the Ctrl-C to stop -->
          <wait>false</wait>
          <!-- the actual configuration for the webapp -->
          <configuration>
            <!-- pick some port likely to be free, it has to be matched in the test definitions -->
            <properties><cargo.servlet.port>10001</cargo.servlet.port></properties>
          </configuration>
          <!-- what to deploy and how (grabbed from the dependencies below) -->
          <deployables>
            <deployable><groupId>com.example</groupId><artifactId>myapp</artifactId><type>war</type></deployable>
          </deployables>
        </configuration>
      </plugin>
      <!-- configure the Surefire plugin to run the tests in the integration-test
           phase instead of the normal test phase of the lifecycle -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration><skip>true</skip></configuration>
        <executions>
          <execution><id>integration-tests</id><phase>integration-test</phase><goals><goal>test</goal></goals>
            <configuration><skip>false</skip></configuration></execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <!-- we use the HtmlUnit variant of JWebUnit for testing -->
    <dependency><groupId>net.sourceforge.jwebunit</groupId><artifactId>jwebunit-htmlunit-plugin</artifactId><version>2.2</version><scope>test</scope></dependency>
    <!-- we depend on our own app, so the deployment setup above can find it -->
    <dependency><groupId>com.example</groupId><artifactId>myapp</artifactId><version>1.0-SNAPSHOT</version><type>war</type></dependency>
  </dependencies>
</project>

Note that in this case we depend on a SNAPSHOT release of our own app. This means that you can nicely run this test in your continuous integration server, triggered by the successful build of your main application (as well as any change in the tests themselves). If you use my favorite CI server Hudson, you can even tell it to aggregate the test results onto the main project.

Having a POM like this you can launch straight into the JWebUnit Quickstart. A basic setup needs only the POM and one test case in src/test/java. Of course you can use a different testing framework if you want to.
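To make that concrete, here is a sketch of such a test case. The class name, context path, port and marker text are all invented, and the JWebUnit 2.x WebTester API on top of JUnit 3 is assumed:

```java
import junit.framework.TestCase;
import net.sourceforge.jwebunit.junit.WebTester;

// Illustrative smoke test; the base URL must match the port and
// context path the Servlet engine is configured with.
public class SmokeTest extends TestCase {
    private WebTester tester;

    protected void setUp() {
        tester = new WebTester();
        tester.setBaseUrl("http://localhost:10001/myapp");
    }

    public void testFrontPageLoads() {
        tester.beginAt("/");                 // fetch the start page
        tester.assertTextPresent("MyApp");   // hypothetical marker text on the page
    }
}
```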

You can also switch the Servlet engine or deploy into a running one. The Cargo documentation is pretty decent, so I recommend looking at that. They certainly know more Servlet engines than I do.

If you use profiles to tweak Cargo in the right way, you should be able to run the same test suite against a proper test system using the same stack as your production environment. I haven't gone there yet, but I intend to. The setup used here is really intended for regular tests after each commit. By triggering them automatically on the build server the delay doesn't burden the developers, while the tests are still fast enough to give you confidence in what you are doing.

Tuesday, May 12, 2009

Inserting new lines with vim regular expressions

If you want to insert new lines with sed-style regular expressions in vim, the usual '\n' doesn't work in the replacement (it inserts a NUL character instead). The trick is to produce a '^M' by hitting Ctrl-V,Ctrl-M (alternatively, '\r' in the replacement also produces a line break). For example:

:%s/word /word^M/g

will replace the space after the word "word" with a line break across the whole file.

Thursday, April 9, 2009

Git-SVN for lazy people

Here's what I think is a minimal set of git-svn commands that are sufficient to work offline with a Subversion repository:

Cloning a copy of the repository

Get your copy with

git svn clone http://my.svn.repository/path -s

The -s indicates a standard trunk/tags/branches layout; otherwise you need to pass --trunk, --tags and --branches to describe your layout.
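For a repository with a non-standard layout, the clone might look like this (the directory names here are invented):

```
git svn clone http://my.svn.repository/path \
    --trunk=main --branches=branchdir --tags=tagdir
```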

Committing changes

Commit changes with

git commit -a -v

in the top folder. The -a implicitly stages all modified and deleted tracked files (brand-new files still need an explicit "git add"), the -v puts a diff into your commit message editor. Note that there is no "svn" in this command, we are using standard git here.

Fetching upstream changes

This grabs all upstream changes and merges them into your working copy:

git svn fetch && git svn rebase

Pushing your changes

To publish the changes you committed locally do:

git svn dcommit

The "d" in "dcommit" is deliberate. The command pushes all changes you have committed locally back into the original Subversion repository. Your working copy needs to be clean and up-to-date for this to work.

Obviously there is much more you can do, particularly since the local copy is in many ways just a normal git repository. But we wouldn't be lazy if we read about more than we need, would we?

Thursday, March 12, 2009

Letting Maven use local libraries

Sometimes it is useful to have Maven use some JARs that are checked in with the project, e.g. because you don't trust anything else, or because you can't find a library in an existing repository and don't want to run your own.

It actually turns out to be pretty easy. Add something like this to your pom.xml (the repository id is arbitrary):

<repositories>
  <repository>
    <id>local-lib</id>
    <name>Local repository in project tree</name>
    <url>file:${basedir}/lib</url>
  </repository>
</repositories>

Afterwards Maven will try to find a repository in the "lib" folder of your project. Let's assume you want this dependency in the local repository (hopefully something with better chosen IDs, but I decided to keep an existing pattern here):

<dependency>
  <groupId>owlapi</groupId>
  <artifactId>owlapi</artifactId>
  <version>2.2.0</version>
</dependency>
What you need for this is a folder structure representing those three bits of information: groupId, artifactId and version (in that order), i.e. lib/owlapi/owlapi/2.2.0. In that folder Maven expects at least four things: the JAR file, a matching POM file, and the SHA1 sums for both. All files are named using the artifactId and the version, i.e. "owlapi-2.2.0.jar" for the JAR and the same for the POM, just with a different extension.

To get the SHA1 sums you can do this (assuming you run bash and have the sha1sum tool available):

for file in *; do sha1sum "$file" > "$file.sha1"; done
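To see the whole layout in one go, here is a sketch that builds the folder structure with dummy stand-ins for the real JAR and POM (the owlapi coordinates follow the example above):

```shell
# create the groupId/artifactId/version folder structure
mkdir -p lib/owlapi/owlapi/2.2.0
# dummy files standing in for the real JAR and POM
echo "jar bytes" > lib/owlapi/owlapi/2.2.0/owlapi-2.2.0.jar
echo "pom xml" > lib/owlapi/owlapi/2.2.0/owlapi-2.2.0.pom
# generate the .sha1 checksum files next to each artifact
( cd lib/owlapi/owlapi/2.2.0 && for file in *; do sha1sum "$file" > "$file.sha1"; done )
ls lib/owlapi/owlapi/2.2.0
```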

If you are nice and add the sources (has to be a JAR with "-sources" in the name), then the result looks something like this:

lib/owlapi/owlapi/2.2.0/owlapi-2.2.0.jar
lib/owlapi/owlapi/2.2.0/owlapi-2.2.0.jar.sha1
lib/owlapi/owlapi/2.2.0/owlapi-2.2.0.pom
lib/owlapi/owlapi/2.2.0/owlapi-2.2.0.pom.sha1
lib/owlapi/owlapi/2.2.0/owlapi-2.2.0-sources.jar
lib/owlapi/owlapi/2.2.0/owlapi-2.2.0-sources.jar.sha1

The POM itself just contains the basic description (and possible dependencies):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>owlapi</groupId>
  <artifactId>owlapi</artifactId>
  <version>2.2.0</version>
  <description>Java interface and implementation for the Web Ontology Language OWL</description>
  <licenses>
    <license>
      <name>GNU Lesser General Public License, Version 3</name>
    </license>
  </licenses>
</project>

That's all there is to it -- the next time you run Maven it should "download" the files from that location. Considering that the POM also provides a uniform way of storing license, source and version information, this is actually quite a useful approach for storing dependencies inside your project's space.

Wednesday, February 25, 2009

Using Firefox profiles

Firefox can get awfully slow once you have all your favorite web-developer tools installed. As much as I love Firebug and the Web Developer extension, they certainly do not enhance your browsing experience.

The solution to that problem is to run multiple instances of Firefox with different profiles. If you run Firefox like this:

firefox -ProfileManager -no-remote

then you get a dialog to configure profiles. You can start a preconfigured profile with:

firefox -P $PROFILE_NAME -no-remote

Every profile has its own settings, most importantly it has its own configuration for extensions. By creating a profile with only the basic Firefox extensions and one with all the web-developer tools you can have a fast browser and one that does all its magic for development purposes.
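Profiles can also be created without going through the dialog; assuming "webdev" as the name for the new profile:

```
firefox -CreateProfile webdev
```

This creates the profile and exits, so you can script the setup of your development browser.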

Note that the "-no-remote" is not always necessary. It stops Firefox from reusing an already running instance, so you really only need it if there is a Firefox open that uses a profile you don't want. Using "-no-remote" will cause an error if an instance with the same profile is already running -- it's quite a straightforward error message, though.