Monday, March 31, 2008

Sun's Java HTTP Server

Some colleagues have pointed out that Sun's implementation of the Java SE 6 JRE includes a built-in Java HTTP server. There is not much information regarding this online, but there are enough details to start using it for small HTTP service needs.

The main step a developer must perform to make use of the built-in HTTP server is to implement the HttpHandler interface. The Javadoc API description for the com.sun.net.httpserver package contains an example of implementing HttpHandler as well as an example of applying TLS/SSL.

The code listing that follows is my own implementation of the HttpHandler interface and is adapted from the example provided in the Javadoc documentation for the com.sun.net.httpserver package.

package dustin.examples.httpserver;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;

/**
 * Simple HTTP Server Handler example that demonstrates how easy it is to apply
 * the HTTP Server built-in to Sun's Java SE 6 JVM.
 */
public class DustinHttpServerHandler implements HttpHandler
{
   /**
    * Implementation of the only required method expected of an implementation
    * of the HttpHandler interface.
    *
    * @param httpExchange Single-exchange HTTP request/response.
    */
   public void handle(final HttpExchange httpExchange) throws IOException
   {
      final String response = buildResponse();
      // UPDATE (01 April 2008): Thanks to Christian Ullenboom for pointing
      // out the constant used below (see comments on this blog entry).
      httpExchange.sendResponseHeaders(HttpURLConnection.HTTP_OK, response.length());
      final OutputStream os = httpExchange.getResponseBody();
      os.write( response.getBytes() );
      os.close();
   }

   /**
    * Build a String to return to the web browser via an HTTP response.
    *
    * @return String for HTTP response.
    */
   private String buildResponse()
   {
      final StringBuilder response = new StringBuilder();
      response.append("<title>Sun's JVM HttpServer in Action</title>");
      response.append("<h1 style=\"color: blue\">Hello, HttpServer!</h1>");
      response.append("<p>This example shows that the Java SE 6 HttpServer ");
      response.append("included with the Sun JVM is easy to use.</p>");
      return response.toString();
   }
}

The HttpHandler implementation shown above can be executed with a simple Java application as shown in the next code listing. Note that the port and the URL context that will be used to access the HttpServer are specified in this main executable code rather than in the more generic HttpHandler implementation above.

package dustin.examples.httpserver;

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

/**
 * Simple executable to start HttpServer for HTTP request/response interaction.
 */
public class Main
{
   public static final int PORT = 8000;
   public static final int BACKLOG = 0;   // none
   public static final String URL_CONTEXT = "/dustin";

   /**
    * Main executable to run Sun's built-in JVM HTTP server.
    *
    * @param args the command line arguments
    */
   public static void main(String[] args) throws IOException
   {
      final HttpServer server
         = HttpServer.create(new InetSocketAddress(PORT), BACKLOG);
      server.createContext(URL_CONTEXT, new DustinHttpServerHandler());
      server.setExecutor(null); // allow default executor to be created
      server.start();
   }
}

To compile these two classes, one performs normal compilation of the Java classes. To run the HTTP server, one runs the main executable just shown. This is demonstrated in the following screen snapshot.

The HTTP Server runs until I instruct it to stop with a CTRL-C. While it is running, I can view the rendered web page by accessing a URL based on port specified in the main class (8000) and a web context specified in the main class ("/dustin"). A screen snapshot of the rendered page in a web browser is shown next.
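As a quick sanity check, the server and handler shown above can be exercised entirely from Java. The sketch below is my own addition (the class name and inline handler are illustrative, not from the original listings): it starts an HttpServer on an ephemeral port rather than 8000 so it cannot collide with a running instance, registers a handler similar to DustinHttpServerHandler, and fetches the page back with HttpURLConnection.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class HttpServerSmokeTest
{
   public static void main(final String[] args) throws IOException
   {
      // Port 0 requests an ephemeral port so the test never collides with port 8000.
      final HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
      server.createContext("/dustin", new HttpHandler()
      {
         public void handle(final HttpExchange httpExchange) throws IOException
         {
            final String response = "<h1>Hello, HttpServer!</h1>";
            httpExchange.sendResponseHeaders(HttpURLConnection.HTTP_OK, response.length());
            final OutputStream os = httpExchange.getResponseBody();
            os.write(response.getBytes());
            os.close();
         }
      });
      server.setExecutor(null);
      server.start();

      // Fetch the page back over HTTP to confirm the handler is responding.
      final int port = server.getAddress().getPort();
      final URL url = new URL("http://localhost:" + port + "/dustin");
      final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
      final InputStream is = connection.getInputStream();
      final ByteArrayOutputStream baos = new ByteArrayOutputStream();
      final byte[] buffer = new byte[1024];
      int bytesRead;
      while ((bytesRead = is.read(buffer)) != -1)
      {
         baos.write(buffer, 0, bytesRead);
      }
      is.close();
      System.out.println("HTTP " + connection.getResponseCode() + ": " + baos.toString());
      server.stop(0);  // stop immediately; a real server would keep running
   }
}
```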

There are several important observations to be made regarding use of the HTTP Server.

  1. The com.sun.net.httpserver package name makes it clear that this functionality is part of Sun's JRE implementation and is not standard across all JREs for Java SE 6.
  2. It is not shown here, but the Spring framework provides a SimpleHttpInvokerServiceExporter to make use of the Java HTTP Server. Related classes include SimpleHttpServerFactoryBean and SimpleHttpInvokerRequestExecutor. These are mentioned in the Spring 2.5, an Update presentation (see slide 16).
  3. The Java HTTP server is not a full-fledged HTTP Server.
  4. Another blog entry refers to the built-in Java HTTP server as "Core HTTP Server" and references "mini servlets."
  5. See Bug ID 6270015 ("Support a Light-weight HTTP Server API").

As the code listings and snapshots above demonstrate, it is a straightforward process to implement a simple Java HTTP Server using the Sun-supplied built-in Java HTTP Server.

UPDATE (23 November 2008): The article Software Development: Easiest way to publish Java Web Services talks about how to use this HTTP server to publish web services.

Using JEdit with OpenLaszlo and Flex

I like to use JEdit for many "quick and dirty" revisions, especially when I don't want to deal with a full-fledged Java IDE's need for IDE-specific set-up like projects. JEdit can be especially useful for making minor changes to a particular file. I really grew to like JEdit for Ruby text file editing (no compiling, building, or assembling needed in that situation). In this blog entry, I'll demonstrate how easy it is to configure JEdit to employ XML color coding for OpenLaszlo's XML-based LZX language and for Flex's XML-based MXML language.

If you don't do anything to associate LZX or MXML files with XML in JEdit, they will appear as shown in the following two screen snapshots (click on images to see larger versions). The OpenLaszlo file is shown first, followed by the MXML file. Note that there is no color coded syntax in either example.

To enable color syntax highlighting for any file currently open in the JEdit buffer, one can use JEdit's Utilities->Buffer Options->Edit Mode and select "XML" as the Edit Mode. This is shown in the next screen snapshot.

Color syntax highlighting based on XML grammar will then be applied to the file in the buffer. The primary disadvantage to this approach is that it is only temporary.

A more permanent solution is to use JEdit's Utilities->Global Options->Editing tool. The "Change settings for mode:" option should be changed to XML, the checkbox next to "Use default settings" should be unchecked, and then the extensions "lzx" and "mxml" should be added to the text field next to the label "File name glob:"

The next screen snapshots show this window when XML has been selected, but the Default Settings are still in place and the "mxml" and "lzx" extensions have not been added to the "File name glob" field.

After the changes described above have been made ("lzx" and "mxml" added), the window looks like the next screen snapshot.

With the LZX and MXML file types now associated in JEdit as XML source, color highlighting will be available for those types of files. This is demonstrated in the next two screen snapshots, again with the LZX example first, followed by the MXML example. Notice now they both have color highlighting.

You may also have noticed that the JEdit Global Options window in which we set the "File name glob" also has a "First line glob." What this means is that you don't even need to specify lzx and mxml as file extensions in the "File name glob" if you always have an XML prolog in the first line of your LZX and MXML files. For example, I could add the line <?xml version = '1.0'?> to the top of my .mxml and .lzx files and get XML color coding in JEdit via the XML option without ever explicitly associating those file types with XML. However, I like to explicitly associate MXML and LZX with XML in JEdit as shown above to ensure that my LZX and MXML files are displayed with color syntax even if the prolog line is left off.

JEdit does not supply all the features of a full-fledged IDE, but that is also its primary advantage at times when those additional features are not necessary. However, as proven by a recent online survey ("Do you use syntax coloring?"), nearly every Java developer uses color syntax to some degree. The easy steps demonstrated in this blog entry allow one to enjoy syntax coloring for OpenLaszlo LZX files and for Flex MXML files when using JEdit.

Of course, most modern IDEs and text editors offer similar capabilities for associating other file extensions with a known XML file type. The steps to do this are particular to each IDE or text editor.

Using OpenLaszlo Command-Line Compiler (lzc)

I normally use the OpenLaszlo Developer's Console or the URL approach to compiling OpenLaszlo applications. However, there are situations in which it is handy to use the OpenLaszlo command-line compiler (lzc). This blog entry covers use of the OpenLaszlo command-line compiler.

Two very important first steps when using the OpenLaszlo command-line compiler are to

1. Set the environment variable LPS_HOME to the directory in which OpenLaszlo is installed or unzipped. For example, I have OpenLaszlo unzipped in C:\openlaszlo, so I set my environment variable as LPS_HOME=C:\openlaszlo.

2. Place the OpenLaszlo directory with the lzc executable in your path. On my Windows machine, with LPS_HOME set as described in step #1, I add the following to my path: %LPS_HOME%\WEB-INF\lps\server\bin.

You know you have been successful with these first two steps if you can run lzc --help from your root directory and see something like the following screen snapshot (click on snapshot to see larger version).

As this lzc --help example demonstrates, there are several lzc compiler options you can use. These are described in greater detail in Chapter 49 ("Understanding Compilation") of the OpenLaszlo Application Developer's Guide.

The command-line option you might use the most is the one that controls the supported runtime to compile the OpenLaszlo application code to. Although the --runtime compiler option in the --help display shown above indicates that swf9, j2me, and svg are possible runtimes to specify, these are actually simply reserved for future use and will instead compile to DHTML if specified. The listed runtime options that actually are currently supported are swf7 (the default if no runtime is specified), swf8, and dhtml.

The next screen snapshot shows how to use the command-line lzc compiler to compile to the three primarily supported runtimes.

The commands above will lead to the generated files shown in the next screen snapshot.

As the screen snapshot above shows, the runtime environment of the compiled files is easy to determine from the name of the files.

The -v option for the lzc command-line compiler prints out compiler progress. The output from the -v option is highly verbose, so you'd probably want to specify a log file with that option. The next screen snapshot demonstrates the use of these two command-line options together.

Note how large this log file is. Note also the relative size of the source file (example1.lzx). The log created from the -v option is 4475 times larger than the source file being compiled and nearly 21 times larger than the compiled example1.js file. In other words, you definitely want to specify a log file for the -v output.

There are many advantages to using OpenLaszlo's Developer's Console or using URLs with request types (lzt) and Laszlo runtime (lzr) as URL parameters. However, there are times when the command-line compiler is advantageous (such as part of scripted builds).

Saturday, March 29, 2008

Viewing Flex Source in a Flash Application

It can be very helpful when a Flash application used as a demonstration or as an illustration of a Flex concept makes its source code readily available for viewing. You can determine if this is the case for a Flash application by right-clicking in an area on the web page that you know to be running in the Flash player. If source code viewing is enabled, the right-click will result in a window like that shown in the next screen snapshot (click on all images in the blog to see larger versions).

When source code view is not enabled, the "View Source" option will not be available when you right-click on the Flash player rendered area of the web page. An example of this is shown in the next screen snapshot.

When one clicks on the "View Source" option, the content of the pointed at URL is displayed in the browser. So, how is the URL specified for which page content is rendered when "View Source" is selected? In MXML, this is accomplished with the viewSourceURL attribute of the MXML root Application tag. A code snippet of this is shown next.

<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                width="500" height="300"
                viewSourceURL="C:\...">

In this code snippet, the URL is local (note the C:\), so I must compile the Flex application that includes this with the -use-network=false option. When I compile the application with this Application opening tag and run the application, I can right-click on the Flash Player section and select "View Source." A separate web browser opens up with the source code of this application.

It is extremely important to note that, in the case shown above, there is no guarantee that the URL pointed to by the viewSourceURL attribute actually points to the source MXML code of this application. In fact, it can point at anything, including the C:\ drive of the local machine. If it were expressed simply as viewSourceURL="C:\\" and run, the "View Source" option would actually show a directory listing for the C: drive on that machine. In other words, there is no check or assurance that the file pointed to by the viewSourceURL points to MXML code or even to something that exists. It is your job as the developer to ensure the URL is correct.

In many cases, one does not want the URL to point to the local filesystem, but instead wants the URL to point to an online resource accessible via the HTTP protocol.

The code below shows how the URL can reference an online resource via the HTTP protocol.

<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                width="500" height="300"
                viewSourceURL="http://...">

This code example also shows that there is no verification that the pointed-to URL actually relates to the application. The example above will compile and run fine (assuming that the -use-network option has been removed or set to false). One will even be able to "View Source" with the normal right-click and selection, but will then be taken to the Flex 3 Language Reference provided in the URL rather than the actual source code for the application.

There are several implications to the fact that viewSourceURL can point at any link regardless of any relationship between the linked-to content and the application. The most obvious is that even well-intentioned developers may have their applications linking to incorrect or outdated versions of the source code. Another implication is that the only way to turn off the ability to view the source code is to remove the viewSourceURL attribute from the Application tag. The most important implication is that there is no enforced connection between the URL whose content is rendered and the application's actual source code unless the developer makes it so.

There are several resources covering the ability to make Flex and ActionScript source code available for viewing. These include Flex Builder documentation on Publishing Application Source Code, Viewing the Application Source Code, Flex 2 Beta ViewSource, and viewSourceURL: Publish Source in Flex.

JAXB with NetBeans 6 and 6.1

In my article Better JPA, Better JAXB, and Better Annotations Processing with Java SE 6, I showed how to use the JAXB reference implementation's xjc compiler to create Java binding classes from a source W3C XML Schema definition document. In articles on a specification implementation or technology, I prefer to show use of these things directly rather than through an IDE because I believe it is useful to understand how things work "under the covers" and because not everyone uses the same IDE. However, in my actual work not meant to provide examples to others, I make heavy use of an IDE's ability to do things and hide details of the implementation from me. In this blog entry, I'll demonstrate how easy it is to generate JAXB Java classes that bind to underlying XML with NetBeans 6.0 and 6.1 beta.

NetBeans offers file creation wizards for many types of files and W3C XML Schema files (.xsd) are no exception. The screen snapshot that follows (click on all screen snapshots in this blog to see larger versions) shows my simple XML Schema file after the wizard created the root tag (xsd:schema with its namespaces) and I had added the body of the document.

The screen snapshot above shows NetBeans' "Source" view of this schema file. NetBeans also provides a "Design" view of an XML Schema file and this is shown next for the example schema.

Once you have access to an XML Schema file, it is very easy with NetBeans to create Java classes that bind to that schema. I will demonstrate this next with the SimpleExample.xsd schema file shown in the two screen snapshots above. To use JAXB and NetBeans to generate Java classes from an existing XML Schema file, simply choose "JAXB Binding" from the allowable file types (under "XML" category). This is shown in the next screen snapshot.

Once I have selected "JAXB Binding" as my new file type, the next screen in the wizard is the "New JAXB Binding: Configure XML Binding" window and that is shown in the next screen snapshot.

Once I click on the "Finish" button, NetBeans will generate the appropriate Java classes that bind to XML that conforms to the provided XML Schema. Both the "Projects" view and the "Files" view show the results of this. From the "Files" view, I can see the generated .class files as shown in the following screen snapshot.

The "Projects" view shows that my newly created JAXB Bindings has been added to the project with the arbitrary name I provided it ("DustinArbitraryName"). The next screen snapshot shows this.

In the "Projects" view, there are a few things of interest that I can do at this point. First, I can right-click on "JAXB Bindings" and click on the option "Regenerate Java Code" to regenerate the Java .class files based on the XML Schema. This would be useful, of course, whenever the XML Schema changes. However, it is important to note here that the XML Schema associated with this "JAXB Bindings" entry is a copy of the original XML schema from which I generated the JAXB Bindings. In other words, if I change the original schema file, those changes won't be reflected in the generated Java classes even when I use this "Regenerate Java Code" option. There is an intermediate step I must take in that case and this is discussed next.

To update the "JAXB Bindings" copy of the XML Schema with any changes I make to the original XML Schema file, I must right-click on the schema's name (in this case SimpleExample.xsd) under the JAXB Bindings name (in this case "DustinArbitraryName"). I can then click on the option "Refresh" to refresh this JAXB Binding's copy of the XML schema to match the original one that I have changed. This is shown in the next screen snapshot.

To summarize how to update the JAXB Bindings whenever you update your original XML Schema, you must perform the following steps:
1. Edit and save the original XML Schema file in NetBeans.
2. Right-click on the schema file's name under the JAXB Bindings area and click on the "Refresh" option. This will update the JAXB Binding's copy to match the original copy of the XML Schema file.
3. Right-click on JAXB Bindings and click on the "Regenerate Java Code" option. New Java .class files will be generated for the new XML Schema file.

There are times you may want to regenerate the .class files ("Regenerate Java Code") without refreshing the XML Schema (don't need to use "Refresh"). The most likely scenario for this is when you want to change something about the JAXB mapping used in generating the Java classes, but still use the same existing XML Schema source file. In such a case, you right-click on the JAXB Bindings name (in this case DustinArbitraryName) and select the option "Change JAXB Options." This is shown in the next screen snapshot.

When the option to "Change JAXB Options" is selected, a window very similar to the one we saw earlier for creating a new JAXB Bindings is shown. The only difference is that this one is titled "Change JAXB Options" rather than "New JAXB Options." As with the "New JAXB Options" window, the "Change JAXB Options" window allows the developer to provide a binding file, a catalog file, whether vendor JAXB extensions should be allowed, and five JAXB compiler options.

The NetBeans 6.0 "Getting Started with JAXB" tutorial covers what these JAXB Bindings options are. The tutorial describes what it means to check Catalog File, Binding File, Extension, and the five compiler options.

NetBeans 6 (and 6.1 beta) make using JAXB to create Java binding classes from an existing XML Schema document easier for the developer, especially considering that the generated class files are automatically placed in the appropriate classes output directory in a NetBeans project. The wizard simplifies the process, but leaves many options open for customization such as vendor extensions and custom binding files.

By the way, as you've likely heard by now, there is a NetBeans IDE 6.1 Beta Blogging Contest that ends on April 19. This blog entry has been submitted as an entry into that contest.

Friday, March 28, 2008

Remote JMX, RMI, and JMXServiceURL

When using Remote JMX with RMI, one of the common gotchas results from problems with the JMX connector server address string supplied to the JMX connector server and to the JMX connector client. This address string is encapsulated in the JMXServiceURL class and useful details about it are found in its Javadoc-generated documentation. This documentation tells us that JMXServiceURL represents an "address of a JMX API connector server" and that it needs to meet the format service:jmx:protocol:sap.

The Javadoc-generated documentation for the javax.management.remote package and for the javax.management.remote.rmi package is even more useful in understanding the substance of the JMX connector server address (JMXServiceURL).

The javax.management.remote package documentation explains the JMX connector server address in greater detail and even offers an example of such an address for an RMI connector: service:jmx:rmi:///jndi/rmi://myhost:1099/myname, where myhost:1099 indicates your own host and port. This package's Javadoc description also informs us that the first rmi in the address specifies the RMI connector while the second rmi in the address specifies the RMI registry.

As you would expect, the Javadoc description for the RMI-specific javax.management.remote.rmi package has even more specific RMI details than does the more general javax.management.remote package. This documentation provides deep coverage of the many different ways that an RMI connection can be established and looked up.

I prefer to use a JMXServiceURL of the format service:jmx:rmi://<host>:<port>/jndi/rmi://<host>:<port>/jmxrmi or service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi for specifying my standard Remote JMX Connectors via RMI.
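To see how the JDK itself decomposes such an address, a JMXServiceURL can be constructed and queried directly. This is a small sketch of my own (the myhost:1099 address is illustrative):

```java
import javax.management.remote.JMXServiceURL;

public class JmxServiceUrlDemo
{
   public static void main(final String[] args) throws Exception
   {
      final JMXServiceURL url =
         new JMXServiceURL("service:jmx:rmi:///jndi/rmi://myhost:1099/jmxrmi");
      System.out.println("Protocol: " + url.getProtocol());  // the connector protocol (rmi)
      System.out.println("URLPath:  " + url.getURLPath());   // the RMI registry lookup portion
   }
}
```

Because the host and port between rmi:// and /jndi are omitted in this form, getHost() falls back to the local host name and getPort() returns 0; the registry address lives entirely in the URL path.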

A nice pictorial representation of the JMX connector server address is available online; it shows which portion of the address makes up the RMI connector and which part makes up the RMI registry address. Monitor Your Applications with JConsole - Part 3 provides coverage of the connector server address using RMI. Chapter 9 ("Distributed Services and Connectors") of "JMX Accelerated Howto" covers the JMX connector server address in significant detail and provides several examples. The JMX Technology for Remote Management section of Getting Started with Java Management Extensions: Developing Management and Monitoring Solutions contains an example of using a JMX RMI connector server. Finally, the Remote Management lesson of the Java Management Extensions Trail of the Java Tutorials contains very useful background on using Remote JMX. The example code that comes with the JMX tutorial is especially useful.
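The whole round trip can also be demonstrated in-process. The following sketch is my own (not taken from any of the resources just mentioned): it creates an RMI registry on port 1099, exposes the platform MBeanServer through an RMI connector server, and connects a client to it over the same address. It assumes port 1099 is free on the machine running it.

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteJmxDemo
{
   public static void main(final String[] args) throws Exception
   {
      // The RMI registry that the second "rmi" in the address refers to.
      LocateRegistry.createRegistry(1099);

      final MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
      final JMXServiceURL url =
         new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");

      // Server side: bind the connector server into the registry under "jmxrmi".
      final JMXConnectorServer connectorServer =
         JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbeanServer);
      connectorServer.start();

      // Client side: connect using the same address and query the MBean server.
      final JMXConnector connector = JMXConnectorFactory.connect(url, null);
      final MBeanServerConnection connection = connector.getMBeanServerConnection();
      System.out.println("MBean count: " + connection.getMBeanCount());

      connector.close();
      connectorServer.stop();
   }
}
```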

Hashtable Versus HashMap

I find myself using ArrayList and HashMap as my two favorite "default" Java collections and I use the other collections significantly less frequently than these two favorites. Although I do not use Hashtable or Vector very often, they have their place and their many similarities with my two preferred Collections make it more interesting to focus on how they are different.

In this blog entry, I'll focus on how Hashtable is different from HashMap, but most of these differences apply to the contrasting of Vector and ArrayList as well.

Hashtable has been around longer than HashMap. Hashtable was introduced with JDK 1.0 and so predates the birth of the Java Collections Framework and the naming convention of Collections in that framework. The other two types of collections that pre-dated the Java Collections Framework were Vector and array.

The Java Collections Framework was introduced with JDK 1.2 (among the most exciting of the Java releases) and introduced many collections including my favorites HashMap and ArrayList. Also with JDK 1.2, the Hashtable and Vector classes were retrofitted to be part of this new Java Collections Framework and, as part of this, to implement interfaces from that framework. Hashtable was altered to implement Map and Vector was similarly altered to implement List.

Although Hashtable was retrofitted to implement Map and become part of the Java Collections Framework, its name obviously could not be changed because of backwards compatibility issues. Therefore, the "table" portion of its name could not have its "t" changed to an uppercase "T." In fact, there is a naming convention for implementations and interfaces in the Java Collections Framework, and neither Hashtable nor Vector could be changed to meet this convention. Most implementations in the Java Collections Framework (such as ArrayList, HashMap, and TreeSet) have names formed from an implementation detail as the first portion and the implemented interface as the last portion.

The main Java Collections Framework documentation contains an Overview section with a "Collections Implementations" subsection that talks about this naming convention and shows a table with interfaces on the rows and implementations along the columns. Not all collections are shown in this table. For example, Map collections not shown in the table include ConcurrentHashMap, EnumMap, and WeakHashMap.

So, other than name, what differences now exist between a Hashtable and a HashMap if they both are Maps? This question is most quickly answered in the Javadoc documentation for HashMap, which says (in a parenthetical statement), "(The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.)" The HashMap's allowance of nulls is one distinguishing difference from Hashtable. The other distinguishing difference (not being synchronized) is also the distinguishing difference between ArrayList and Vector.
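The null-handling difference is easy to demonstrate. In this sketch of my own, a HashMap happily accepts a null key and a null value, while the same operations on a Hashtable throw NullPointerException:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullHandlingDemo
{
   public static void main(final String[] args)
   {
      final Map<String, String> hashMap = new HashMap<String, String>();
      hashMap.put(null, "null key is fine");
      hashMap.put("key", null);
      System.out.println("HashMap accepts nulls: " + hashMap.size() + " entries");

      // Hashtable implements Map too, but rejects null keys and null values.
      final Map<String, String> hashtable = new Hashtable<String, String>();
      try
      {
         hashtable.put(null, "never stored");
      }
      catch (NullPointerException npe)
      {
         System.out.println("Hashtable rejects null keys");
      }
      try
      {
         hashtable.put("key", null);
      }
      catch (NullPointerException npe)
      {
         System.out.println("Hashtable rejects null values");
      }
   }
}
```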

ArrayList and HashMap can be used in multithreaded situations if the synchronization wrappers are used. This is easily accomplished through calls to the Collections class such as Collections.synchronizedMap(mapOfYourChoice) or Collections.synchronizedList(listOfYourChoice).
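A minimal sketch of the wrapper approach (my own example): note that iteration over a synchronized wrapper still requires manual synchronization on the wrapper object, as its Javadoc warns.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SynchronizedWrapperDemo
{
   public static void main(final String[] args)
   {
      final Map<String, Integer> syncMap =
         Collections.synchronizedMap(new HashMap<String, Integer>());
      final List<String> syncList =
         Collections.synchronizedList(new ArrayList<String>());

      syncMap.put("count", 1);
      syncList.add("entry");

      // Individual operations are synchronized, but iteration must be guarded manually.
      synchronized (syncList)
      {
         for (final String entry : syncList)
         {
            System.out.println("List element: " + entry);
         }
      }
      System.out.println("Map value: " + syncMap.get("count"));
   }
}
```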

In this blog entry, I tried to have each link to Java Collection Framework reference a different resource on this widely used piece of Java. The references below are to other articles and blog entries covering the differences between Hashtable and HashMap.

jGuru: What are the differences between HashMap and Hashtable?

Difference Between HashMap and HashTable

Hashtable, HashMap, & Properties

Java Hashmap or Hashtable

Hashtable or HashMap?

HashMap Versus Hashtable

Java HashMap Example

Thursday, March 27, 2008

Five Things to Like About Spring ... Even if You Don't Use It Directly

A while back (May 2005), Bruce Tate's article Five Things I Love About Spring was published. As one would expect, this article focused on things that the Spring Framework does for developers using Spring. There are numerous other articles on the subject and perhaps no better source on the architectural justification for and advantages of using Spring exists than the "book that started it all" (J2EE Design and Development).

In this blog entry, I briefly look at five things that the Spring framework has done or can do for Java developers who don't directly use the Spring Framework in their applications.

1. Spring prompted J2EE to become much more usable as Java EE 5.

While many Java EE developers use Spring in one way or another, some still develop with Java EE without Spring. These users still benefit from Spring because Spring's presence has helped motivate change to Java EE, particularly in terms of increased usability.

2. Spring inspired many of the Java EE 5 advancements such as dependency injection and POJO-based development.

Not only did Spring motivate improvements to Java EE, but Spring also provided examples of techniques and approaches that enterprise developers like that Java EE could emulate. Some of these approaches were even around before Spring, but Spring popularized them as the framework itself rapidly rose in popularity.

3. Spring with Dependencies makes many useful libraries and frameworks available in a single download.

More than once, I have benefited from having a necessary open source product or library readily available from the Spring dependencies directory. This may seem like a minor thing, but it saves me a little time and potentially a lot of aggravation from having to find the appropriate web site, download the appropriate files, and then unpack them. This effort is not a big deal in most cases, but there are times when I just want to quickly try out something I have read about, and the time saved by not having to download it is greater than the time I spend trying it out. Spring's bundling of these dependencies can also be a little reassuring regarding compatibilities and versions of the dependencies.

4. Spring developers help identify useful open source libraries and frameworks.

Related to the point above, I have learned about interesting or promising libraries and open source products simply by browsing the list of Spring dependencies. The Spring developers seem to pick the best of the compatible open source software and so, in many respects, have done research into these products that I can now take advantage of.

5. Spring provides developers with examples of design and Java coding best practices.

The Spring Framework probably shows more flexibility and extensibility than many of us need because of its nature as a common framework. That being said, we all still want a certain degree of flexibility and extensibility in our applications and Spring provides many good examples of how to achieve this. Note that I am not talking about Spring best practices here, but am rather talking about Java best practices that the Spring framework enables, encourages, and/or provides examples for. Because Spring is open source, we can view and get ideas from its code base.

Wednesday, March 26, 2008

Flex Conditional Compilation with Clipboard

A particularly handy use of Flex's conditional compilation is to control when debugging information is output or otherwise handled. I discussed placement of fault information on the clipboard in a previous blog entry. In that blog entry, I included a call to place fault information on the clipboard (flash.system.System.setClipboard(<<yourFaultString>>)) in the standard ActionScript fault handler method I had written. The only downside to doing it this way is that fault information would also be placed on all users' clipboards during production use of this code. The virtue of conditional compiling in this case is clear -- we can use conditional compiling to only include the flash.system.System.setClipboard call when in development mode or debugging mode.

Here is an example of using Flex's conditional compilation to only place fault information on the clipboard when in debugging mode. This example is a simple example of a fault handler method that might be used and which will only place the fault information on the clipboard when in debugging mode.

/**
 * Generic failure/fault handler.
 *
 * @param aEvent Failure/Fault event to be handled.
 */
function handleFault(aEvent:FaultEvent):void
{
   const mName:String = "handleFault(aEvent)";
   const fault:Fault = aEvent.fault;
   const messageString:String =
        "faultCode: " + fault.faultCode + "\n\n"
      + "faultString: " + fault.faultString + "\n\n"
      + "message: " + fault.message + "\n\n"
      + "rootCause: " + fault.rootCause;
   trace( mName + ": " + messageString );
   CONFIG::debugging
   {
      // Only compiled in when CONFIG::debugging is defined as true.
      flash.system.System.setClipboard( messageString );
   }
}

The CONFIG::debugging block near the end of the listing above demonstrates the conditional compilation; the call to setClipboard it wraps is only compiled in when CONFIG::debugging is true.

With that code created, the next step is to set the value of CONFIG::debugging. This is set by using the -define compiler flag for the mxmlc compiler.

The next code snippet shows how this might be accomplished in Ant (Flex Ant tasks could also be used and are available from Adobe Labs for Flex 2 and are built-in to Flex 3).

<exec executable="mxmlc">
   <arg value="${}/FlexSlidesExamples.mxml" />
   <arg line="-debug=${flex.debug}" />
   <arg line="-define=CONFIG::debugging,${flex.debug}" />
   <arg line="-use-network=${}" />
   <arg line="-output ${}/${file.swf}" />
</exec>

For convenience, I set the flex.debug property in a build properties file as either flex.debug=true or flex.debug=false. Note that this is the same value I use to control whether the compiled .swf file will support debugging.

In a real application, of course, you'd probably also want to use conditional compilation as shown here to limit what is shown in the pop-up, because you likely don't want to expose all of those details to users who mostly don't want to see them anyway.

UPDATE (22 April 2008): As asked in the comment section below and as I responded, conditional compilation appears to be a feature unique to Flex 3 (or, at least, Flex 2 doesn't support the Flex 3 mxmlc compiler option as shown in this blog entry). The two screen snapshots below show conditional compilation with Flex 3's and Flex 2's mxmlc command. As the screen shots show, it compiles successfully for Flex 3, but does not recognize the -define option for Flex 2.

Successful Conditional Compilation with Flex 3

Unsuccessful Conditional Compilation with Flex 2

I have one other update to this blog entry (also updated on 22 April 2008). When one adds the CONFIG::debugging label to the Flex code, it must thereafter be provided to the mxmlc compiler. The next screen snapshot demonstrates the error message seen if this is not passed into the mxmlc command with the -define option.

Monday, March 24, 2008

Online JMX Resources

As I stated in a previous blog entry on aging JMX books, the most popular JMX books are each several years old and are therefore somewhat dated. There have been many advancements in the world of JMX since these books were published. Fortunately, there are many good online resources on JMX and I list some of them here. Although this blog entry is being originally created and posted on 24 March 2008, I plan to add additional resources to this entry as I come across them or remember them. I probably won't add a new date for each new entry.


Java Management Extensions

Java SE Monitoring and Management Guide

Understanding JMX Technology

Getting Started with Java Management Extensions: Developing Management and Monitoring Solutions

JMX 1.4 Specification

Java Tutorial: JMX Trail

Java Management Extensions (Dr. Dobb's)

Instrumenting Applications with JMX

JMX Accelerated How-to

Managing the Unmanaged

Enabling Component Architectures with JMX

An Introduction to JMXRemote

JMX Mail Forum

Using the JMX API for Monitoring and Management

Creating Manageable Systems with JMX, Spring, AOP, and Groovy

What is JMX?

Superior App Management with JMX

MXBeans in Java SE 6: Bundling Values Without Special JMX Client Configurations


Using JConsole to Monitor Applications

Monitoring Local and Remote Applications Using JMX 1.2 and JConsole

Using JConsole

Monitor Your Applications with JConsole - Part 1

Monitor Your Applications with JConsole - Part 2

Monitor Your Applications with JConsole - Part 3

JMX, JConsole, and You


Using JMX to Manage Web Applications

Designing Manageable Java EE Platform-Based Applications with JMX API


JMX Best Practices

Apply JMX Best Practices

Best Practices and Design Patterns for JMX Development

Making Optimal Use of JMX in Custom Application Monitoring Systems
(Also appears to go by title Practical Considerations When Instrumenting Applications with JMX)

Design Patterns for JMX and Application Manageability


Summary of JMX-Related Specifications


My Blog

Luis-Miguel Alventosa's Blog

Mandy Chung's Blog

Jean-Francois Denise's Blog

Daniel Fuchs's Blog

Eamonn McManus's Blog









Saturday, March 22, 2008

Comparing Unix/Linux, PowerShell, and DOS Commands

The following lists some of my favorite Unix commands and maps the associated PowerShell and DOS commands, if any. If there is one Unix command I would love to have in PowerShell, it is the grep command with its regular expression support. I have noticed significant improvement in Vista's search capabilities compared to earlier versions of Microsoft operating systems that I have used and I would love to see that harnessed in PowerShell so that I could use it from the command line. The table appears a ways down, so scroll down to it.

UPDATE (24 March 2008): Note that I have updated this table with information on a grep equivalent and on the availability of less as an extension. Thanks to Kirk Munro for pointing both of these out (see Comments) and to Jeffrey Snover for his write-up of Select-String.
Thanks also to Marco Shaw for pointing out that start-transcript (which can be closed with stop-transcript) provides functionality like Unix's script command. Thanks to Jonathan for mentioning tasklist as an alternative to ps and for mentioning F7 for a graphical presentation of history commands.

Unix/Linux             PowerShell                      Windows Vista DOS
script                 start-transcript
  (stop with CTRL-D)     (stop with stop-transcript)
grep                   Select-String
history / h            history / h
less                   less (extension)

Type ‘man’ without any options at the PowerShell command line to see a long list of supported commands and scripting keywords.

The Windows PowerShell Quick Reference and Getting Started with Windows PowerShell are also useful resources.

Thursday, March 20, 2008

NetBeans 6.1: A JavaScript IDE

I have started using NetBeans 6.1 (beta) and have found its JavaScript support to be welcome and helpful. JavaScript and DOM differences among the major browsers have long been a source of deep consternation for web developers and NetBeans does much to deal with browser idiosyncrasies. In this blog entry, I will demonstrate some of these NetBeans JavaScript functions.

One frustration of working with JavaScript and DOM across different browser implementations is the availability of objects and methods in one browser but not in another. NetBeans' auto-completion feature for JavaScript/DOM really helps here. The next screen snapshot (click on all screen snapshots to see larger versions) displays the auto-completion popup for the document object and its getElementById method, which returns an Element.

The line through the getElementById method may look familiar to Java developers because the same decoration is used to indicate a deprecated Java method. In this case, it indicates that one or more of my targeted web browsers does not support this method. The auto-completion popup clearly shows that Internet Explorer 5.5 and Internet Explorer 6 are the "targeted browsers" that do not support this particular method.

The "targeted browsers" idea is useful because there may be situations in which we have the luxury of knowing which browsers we must support or, better yet, which we don't need to support. If we don't need to support a particular browser, that is one more source of potential inconsistency we do not need to worry about.

It is easy in NetBeans 6.1 to specify the targeted browsers. This is done by selecting Tools -> JavaScript Browser Compatibility from the top menu bar of NetBeans. This is shown in the next screen snapshot.

With a click on "JavaScript Browser Compatibility" from the "Tools" drop-down, one is taken to the "Choose Supported Browsers" window. The next screen snapshot shows this window, which indicates that Microsoft Internet Explorer 5.5 and later versions are targeted browsers. If I select "7 and later" (as shown in the next screen snapshot), then Microsoft Internet Explorer 5.5 and 6 are no longer considered my targeted browsers.

With the targeted browsers now only including Microsoft Internet Explorer versions 7 or later, the same auto-completion for document.getElementById no longer shows the method struck out and also does not list any targeted browsers not supporting it. This is because I removed the browser versions that did not support this method from my targeted browsers. See this in the next screen snapshot.

In the process of demonstrating the ability of NetBeans to provide browser compatibility information, my examples also showed how easy it is to use JavaScript method completion in exactly the same way one would use Java method completion in NetBeans.

Another advantage we take for granted with Java IDEs is color-coded syntax. NetBeans 6.1 provides this for JavaScript as shown in the next Hello World example.

With Java IDEs, we have become accustomed to the IDE warning us about problems that may not break compilation, but are likely not what we really want either. NetBeans 6.1 brings this ability to JavaScript. In the next screen snapshot, we can see that NetBeans warns us about an assignment operator that is most likely supposed to be an equality comparison.

Another example of one of these warnings follows, this time demonstrating a warning that the given function will return a value in some cases, but not in all cases.

Of course, warnings about code that compiles but probably does not do what was intended are nice, but it is also nice to know about syntax problems before trying to run the JavaScript application. A JavaScript IDE may never be able to do this as well as a Java IDE can, thanks to Java's static typing, but NetBeans 6.1 does provide feedback on definitive errors that cannot be explained away by dynamic typing. This is demonstrated in the next screen snapshot, which shows a missing curly brace. It is nice to see this before trying to run the web application that uses this JavaScript.

My final example for this blog entry shows two messages at once: an error message about using a reserved keyword ("double") and a warning message about a declaration that doesn't do anything ("no side effects").

There are many reasons that I have always preferred Java over JavaScript and one of these has been the vastly superior tooling (especially IDEs) for Java. However, with NetBeans 6.1, that gap has been narrowed considerably. With NetBeans 6.1, development of Ajax/DHTML applications and other web applications should be considerably easier. I also look forward to taking advantage of this NetBeans 6.1 JavaScript support with my OpenLaszlo development.

More details on NetBeans 6.1's new JavaScript support features can be found in this NetBeans wiki.

By the way, as you've likely heard by now, there is a NetBeans IDE 6.1 Beta Blogging Contest that ends on April 19. This blog entry has been submitted as an entry into that contest.

Wednesday, March 19, 2008

A Standard is Only As Good As Its Implementations

There are many potential advantages in software development when adhering to standards. These advantages include the ability to shift between implementations of a standard, the ability to connect two disparate pieces of software when both meet a standard, a greater pool of developers with knowledge and skill from working with different implementations of the standard, and greater access to literature and other resources on the standard. I called these advantages potential because they are not always realized. In fact, most of these advantages can only be gained if there are multiple implementations of high quality and a high degree of conformance to the standard.

For example, if there is only one good implementation of a standard, then there is no real ability to switch between implementations of that standard. You have no more freedom in such a case than if you used a non-standard approach. Likewise, if there are only one or two strong implementations of a specific standard, you are less likely to enjoy the benefits of a larger skill set and greater availability of related literature than a popular non-standard product might enjoy.

So why does all of this matter? Because too many developers (and I have made this mistake myself) say things like, "I know it's not the best, but it's the standard." I do like to work with standards when possible, but not at the cost of many other positive features and characteristics, especially if I cannot really enjoy the benefits of standardization because there is only one, or a very small number, of decent implementations of that standard.

In fact, de facto standards can be just as useful if they are widely popular. For example, Ant, Maven, Struts, and the Spring Framework are not themselves standards, but they enjoy many of the benefits discussed above (wide developer talent pool, wide literature set, etc.). Many of these products, especially Spring, are heavily standard-based even if they are not themselves standardized.

The DHTML debacle that is the effort to write Ajax/DHTML applications for highly incompatible major browsers (due to inconsistent implementation of the HTML/CSS/DOM/ECMAScript standards) is evidence that standards and standards compliance do matter and do make our lives as software developers easier. It would be nice if the major browsers were all more standards-compliant (and Microsoft Internet Explorer 8 may help with this), but until then I prefer using Flex or OpenLaszlo so that I don't have to worry about browser standardization. It is more important for me to enjoy the benefits of consistent code provided by Adobe (Flash) or Laszlo Systems (Flash/generated DHTML) than to cling to a claim of using web development "standards" that really aren't so standard. The problem of incompatible browsers is, in the end, not a problem with the underlying specifications, but a problem of lax compliance among the major implementations of those standards.

Unfortunately, standards can only be as good as their implementations and the most dominant implementations are not always the best or even good implementations of these standards.

EclipseLink Will be JPA 2.0 Reference Implementation

Oracle Corporation provided TopLink Essentials as the JPA 1.0 reference implementation. Two days ago (17 March 2008), it was announced that EclipseLink was selected as the JPA 2.0 reference implementation. The GlassFish announcement on this states that EclipseLink essentially includes most of Oracle's full-fledged TopLink product.

Because Oracle did provide TopLink Essentials as the Java Persistence API 1.0 reference implementation and because Oracle has been an ardent supporter of JPA, it is not too surprising that the Oracle-led EclipseLink project will be the JPA 2.0 reference implementation.

Monday, March 17, 2008

UnsupportedOperationException and OperationNotSupportedException

I have seen several cases when OperationNotSupportedException is used when UnsupportedOperationException is the better choice. In this blog entry, I outline some of the key differences between these two similar sounding exceptions.

While the names of the exceptions OperationNotSupportedException and UnsupportedOperationException sound very similar, the Javadoc descriptions for each class indicate that they definitely have different intended uses.

The javax.naming.OperationNotSupportedException is a checked exception in the naming package that extends javax.naming.NamingException (which extends java.lang.Exception) and has been available since J2SE 1.3. As part of the javax.naming package, it is not surprising that this exception is specifically related to naming services. Specifically, as the description for this class states, this exception is intended for situations in which a particular naming Context implementation does not support the invoked method (operation).
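To make the naming-specific intent concrete, here is a small sketch of a naming operation rejecting a request with this checked exception. The read-only context scenario, the class name, and the bound name below are my own illustration, not from the Javadoc; the point is that callers are forced to handle the exception at compile time.

```java
import javax.naming.NamingException;
import javax.naming.OperationNotSupportedException;

public class ReadOnlyContextExample
{
   // A hypothetical read-only naming context might reject bind attempts
   // like this; OperationNotSupportedException extends NamingException,
   // so the throws clause is required.
   static void bind(final String name, final Object obj) throws NamingException
   {
      throw new OperationNotSupportedException(
         "bind not supported by this read-only context: " + name);
   }

   public static void main(final String[] arguments)
   {
      try
      {
         bind("java:comp/env/example", new Object());
      }
      catch (NamingException namingEx)
      {
         // Checked exception: the compiler forces this handling on callers.
         System.out.println(namingEx.getClass().getSimpleName());
      }
   }
}
```

Running this prints the exception's simple class name, demonstrating that the checked NamingException hierarchy was exercised.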

For situations other than those described above for OperationNotSupportedException, the better choice of standard exception to indicate an unsupported method or operation is UnsupportedOperationException. UnsupportedOperationException is an unchecked exception in the java.lang package that extends java.lang.RuntimeException and has been available since J2SE 1.2.

As part of the more general java.lang package, UnsupportedOperationException definitely seems more general in nature than javax.naming.OperationNotSupportedException. The Javadoc description notes that this class is a member of the Java Collections Framework.

The NetBeans IDE uses UnsupportedOperationException when it automatically implements methods for a newly created class implementing an interface. This is useful because it allows NetBeans to stub out the methods defined in the interface so that the code will compile. At runtime, however, the unchecked UnsupportedOperationException will be thrown if any of these generated methods is invoked. The UnsupportedOperationException allows the implementation class to be generated and compiled, but does not hide the runtime issue that nothing is actually implemented should a client call one of those methods.
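A minimal sketch of what such a generated stub looks like follows. The interface choice and class name are my own illustration; the "Not supported yet." message mirrors the style of body NetBeans typically generates.

```java
import java.util.Comparator;

public class GeneratedStubExample implements Comparator<String>
{
   @Override
   public int compare(final String first, final String second)
   {
      // Generated stub body: compiles cleanly, but fails loudly at runtime.
      throw new UnsupportedOperationException("Not supported yet.");
   }

   public static void main(final String[] arguments)
   {
      try
      {
         new GeneratedStubExample().compare("apples", "oranges");
      }
      catch (UnsupportedOperationException unsupported)
      {
         // Unchecked: no throws clause or try/catch was needed to compile.
         System.out.println("Caught: " + unsupported.getMessage());
      }
   }
}
```

Because the exception is unchecked, the class compiles and can even be instantiated; the problem only surfaces when the unimplemented method is actually called.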

The NetBeans example described above demonstrates a potentially appropriate use for UnsupportedOperationException, but there are many situations in which its use is inappropriate. For the Collections Framework, there are valid reasons for its use, but in our own designs it can often be a "code smell" of design problems. As stated in Peter Williams's blog entry Why Java is Not My Favorite Language - Reason 16, it is sensible that any method that is important enough to be defined in an interface should generally be implemented in advertised implementations of that interface. This excerpt from Effective Java covers the primary use of UnsupportedOperationException (an implementation does not implement a method in its interface that other implementations do implement).

A useful forum thread on UnsupportedOperationException is available online. Another useful resource related to UnsupportedOperationException is the Java Collections API Design FAQ (see #1 and #2).

JMX, IOException, and NameNotFoundException

When using JMX connectors, it is important that the connector client URL and the connector server URL appropriately match. The only protocol required of remote JMX is RMI and so it is often RMI that is used in conjunction with remote JMX. There are several different types of exceptions that might occur depending on different JMX RMI URL problems, but the focus of this blog entry is on the NameNotFoundException.

The first code listing shown is a very simple JMX client.

package client;

import java.io.IOException;
import java.net.MalformedURLException;

public class JmxSpringClientMain
{
   public static void main(String[] aArguments)
   {
      // The URL's trailing name ("dustinjmxrmi") must match the name the
      // connector server registers in the RMI registry.
      final String jmxRmiStr =
         "service:jmx:rmi://localhost/jndi/rmi://localhost:1099/dustinjmxrmi";
      try
      {
         final JMXServiceURL jmxUrl = new JMXServiceURL(jmxRmiStr);
         final JMXConnector jmxConnector = JMXConnectorFactory.connect(jmxUrl);
         final MBeanServerConnection mbsc = jmxConnector.getMBeanServerConnection();
         System.out.println( "MBean Count: " + mbsc.getMBeanCount() );
         System.out.println( "MBean Default Domain: " + mbsc.getDefaultDomain() );
      }
      catch (MalformedURLException badUrl)
      {
         System.err.println( "ERROR: Problem with JMXServiceURL based on "
                             + jmxRmiStr + ": " + badUrl.getMessage() );
      }
      catch (IOException ioEx)
      {
         System.err.println( "ERROR: IOException trying to connect to JMX "
                             + "Connector Server: " + ioEx.getMessage() );
      }
   }
}

The JMX Connector server that the above client would connect to could be set up programmatically, but I am going to use the Spring framework to do this for me. Here is an excerpt from the appropriate Spring configuration file.


<?xml version="1.0" encoding="UTF-8"?>
<!-- Some class names and values were not preserved in this listing;
     "..." marks them. The standard Spring JMX classes are inferred. -->
<beans xmlns="">

   <bean id="stateHandler" class="...">
      <property name="state" value="Colorado" />
      <property name="capital" value="Denver" />
   </bean>

   <bean class="org.springframework.jmx.export.MBeanExporter">
      <property name="beans">
         <map>
            <entry key="dustin.example:name=jmx,type=spring"
                   value-ref="stateHandler" />
         </map>
      </property>
      <property name="assembler" ref="assembler" />
   </bean>

   <bean id="assembler"
         class="org.springframework.jmx.export.assembler.InterfaceBasedMBeanInfoAssembler">
      <property name="managedInterfaces" value="..." />
   </bean>

   <bean id="registry" class="org.springframework.remoting.rmi.RmiRegistryFactoryBean">
      <property name="port" value="1099"/>
   </bean>

   <bean id="serverConnector"
         class="org.springframework.jmx.support.ConnectorServerFactoryBean">
      <property name="objectName" value="connector:name=rmi"/>
      <property name="serviceUrl"
                value="service:jmx:rmi://localhost/jndi/rmi://localhost:1099/dustinjmxrmi"/>
   </bean>

</beans>

If you compare the JMX RMI URL in the client code with the serviceUrl in the Spring XML used to set up the server, you'll see that they match. If the last part of the JMXServiceURL is changed (from "dustinjmxrmi" to, say, "jmxrmi"), an IOException is thrown by the client with the following output:

Failed to retrieve RMIServer stub: javax.naming.NameNotFoundException: jmxrmi

Note that this is thrown as an IOException when the client code tries to access the JMX connector server. The NameNotFoundException mentioned in the IOException message extends NamingException.

The IOException/NameNotFoundException is a result of having matching host and port information between connector client and server, but having a mismatch on the name at the end of the JMX RMI URL. If the server is not up at all or is at a different host/port combination than specified by the JMX client, the exceptions underlying the resulting IOException in this case are ServiceUnavailableException and two varieties of ConnectException. The exact message is:

Failed to retrieve RMIServer stub: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: Connection refused: connect]

Like the NameNotFoundException, ServiceUnavailableException also extends NamingException.
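Because the wrapped NamingException is typically reachable through the IOException's cause, a client can distinguish the two failure modes described above. The classification strings and the simulated failures in this sketch are my own, for illustration; a real client would catch the IOException from JMXConnectorFactory.connect rather than constructing one.

```java
import java.io.IOException;
import javax.naming.NameNotFoundException;
import javax.naming.ServiceUnavailableException;

public class JmxConnectFailureTriage
{
   // Classify a failed connect attempt by the NamingException wrapped
   // inside the IOException, matching the two error messages shown above.
   static String classify(final IOException ioEx)
   {
      final Throwable cause = ioEx.getCause();
      if (cause instanceof NameNotFoundException)
      {
         return "URL name does not match the name registered by the server";
      }
      if (cause instanceof ServiceUnavailableException)
      {
         return "No connector server found at the specified host/port";
      }
      return "Other I/O problem: " + ioEx.getMessage();
   }

   public static void main(final String[] arguments)
   {
      // Simulated failures standing in for what connect() would throw.
      System.out.println(classify(new IOException(
         "Failed to retrieve RMIServer stub",
         new NameNotFoundException("jmxrmi"))));
      System.out.println(classify(new IOException(
         "Failed to retrieve RMIServer stub",
         new ServiceUnavailableException())));
   }
}
```

This kind of triage makes the difference between "wrong name in the URL" and "server not running at that host/port" obvious in logs instead of leaving only the raw stub-retrieval message.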

For additional details on constructing a JMX RMI URL with JMXServiceURL, see Monitor Your Applications with JConsole - Part 3, JMX Accelerated HowTo (see section titled "RMI and JMXMP URLs"), the Sun JMX forum entry JMXServiceUrls, and DongWoo Lee's JMX RMI Proxy/NAT Connection image.