The Colorado Software Summit has posted four more presentations from the 2008 edition of the conference. The newly available presentations are Subbu Allamaraju's "Pragmatic REST" and "RESTful Web Applications - Fact Versus Fiction" and Anton Bar's "Open Web Operating System" and "Open Web File System."
A bullet point and accompanying comment in one of Subbu's presentations sparked some controversy over the assertion that JSF cannot be fixed from a REST perspective.
In addition to the release of these four new presentations, the Call for Papers for Colorado Software Summit 2009 has also been issued.
Dustin's Software Development Cogitations and Speculations
My observations and thoughts concerning software development (general development, Java, JavaFX, Groovy, Flex, ...). Select posts from this blog are syndicated on DZone and Java Code Geeks and were formerly syndicated on JavaWorld.
Monday, March 30, 2009
GPS Systems and IDEs: Helpful or Harmful?
Every time I visit the Boston area, I am reminded of how nice it is to have easy access to GPS navigation. GPS often enables me to get places faster and with more confidence than I would otherwise have. However, use of GPS is not without its issues. First, the GPS directions are not always the best directions and are sometimes downright wrong. Second, I have found that I do not learn my way around as well as I do when I use a map (or even directions produced by MapQuest or Google Maps), pay attention to street signs myself, and make my own decisions. Just as a portable GPS device generally makes navigation easier, IDEs and other tools can make software development much easier. However, like GPS tools, IDEs are not without their disadvantages.
The directions provided by my GPS navigation device are usually correct in terms of getting me to the desired location, but they are not always the optimal directions. When I am in an area with a very complex street system that I don’t know well (such as the Greater Boston area), I am willing to accept occasionally suboptimal directions in exchange for the benefit of the GPS device nearly always getting me where I need to go. If I’m in an area I know well, however, I occasionally find that I can do a little better without the GPS. In such cases, I forgo the GPS because it is not helpful and can actually be slightly harmful.
Use of an IDE is similar to use of a GPS navigation system. Most of the time, it is a great help, but there are times when I can actually do better without it. This is especially true for code that I know well or code that needs a small, already-identified fix. In those cases, I find it is often easier to quickly load the file in vim and make the minor addition or edit than to wait for the IDE to load and for dependencies to be built and checked.
I have found times when my GPS navigation system simply seems to “lose its mind” or runs into a situation in which it cannot be expected to “know” of extenuating circumstances. In such cases, I need to be familiar enough with reading maps and signs to adjust and figure out the correct route to my destination myself. Likewise with IDEs, there are times when the IDE simply cannot support something we need to do, either because the IDE is too general-purpose for such a feature or because its developers never anticipated that type of support ever being needed. IDEs support plug-in development to help with these specific cases, but it is not always feasible or practical to develop a new plug-in.
In the current java.net poll, a question is asked about using a particular framework because it is included in an IDE. One respondent answered that he or she prefers not to use an IDE with the desired framework support because of versioning conflicts (the version supported by the IDE is different from the version the developer needs to use). I have found situations exactly like the one this respondent describes, where the IDE's support for a different version of my desired framework actually made it more difficult to use my version of the framework with that IDE. On the other hand, I have found the Spring integration features offered by Spring IDE and by Oracle Enterprise Pack for Eclipse to be very helpful in development with the Spring Framework.
A significant disadvantage (for me) of using GPS is that I don’t learn my way around a new city very well when I use GPS and let it do the “thinking” for me. This is not a big deal if I am only visiting a place once or if I can guarantee I’ll always have a working GPS navigation system with me. However, it is useful to know the area better if I will be visiting it frequently or if I may have to revisit without GPS. Also, as described above, there are times when better knowledge of the area is highly desirable because the GPS is wrong or does not provide the optimal solution.
Like GPS systems, IDEs can lead to us knowing less about our environment. With GPS, I may be less likely to learn street names and freeway exits, while with an IDE I may be less likely to learn the package structure of the libraries (including the standard SDK) that I use often. As with the GPS system, this is less of an issue if I don’t use a given library often or if I always have access to an IDE.
If I spend significant time in an area, I tend to learn my way around and need the GPS less, reserving it for more specialized trips. I often don’t need the GPS at all for short, routine trips. I have found the same to be true of IDEs; I prefer not to use them for short, routine pieces of code because the overhead is not worth the very small benefit derived in those situations. For instance, both gvim and jEdit provide color-coded syntax highlighting without the significant overhead associated with most full IDEs.
I recall a time not too long ago when some developers argued against use of an IDE at all. I have never subscribed to that extreme of an approach. In fact, I generally use an IDE. That being stated, I personally find there are times when vim+Ant is enough for simple, routine tasks. Of course, this in no way means that everyone finds that approach better. I think it’s silly to argue that a developer is any less of a developer because of the percentage of the time he or she uses an IDE.
Just like the GPS, we should use the IDE when it is helpful and the benefits outweigh the costs. Because the perceived benefits and perceived costs of IDE or GPS use are different for different people, it is not surprising that we all have different preferences related to level of IDE usage.
While using the GPS here in the New England area, I’ve tried to invest a little additional effort to more carefully observe the street names and exits I need so that I can still learn how to get around the area. This does take a little more effort than simply blindly following the GPS’s pleasant-sounding directions, but I think it will pay off in the long term. Similarly with an IDE, I like to understand what it is doing underneath the covers when it does things for me. In particular, I still try to learn the basic packages of well-used classes in the JDK, Java EE SDK, Flex, and other things I plan to use often in the future.
Just as I would have found it much more difficult to get around the New England area without GPS even though this was not my first time here, I still find that, in general, the IDE helps me be a more productive and efficient developer in the short-term and, if used appropriately, can help me in the long-term as well. I have recently had the opportunity to work with NetBeans 6.5, JDeveloper 11g, and Eclipse 3.4 (Ganymede) and have found all to be highly useful in development. I have heard and read many rave reviews for IntelliJ IDEA as well. We are very fortunate as Java developers to have such a wide variety of mature and productive tools at our disposal. However, even these great tools can sometimes be wrong, can sometimes provide less than optimal solutions, and sometimes simply cannot support what we need to do. In such cases, it is advantageous if we have not allowed ourselves to become completely and hopelessly dependent on the tool.
Thursday, March 26, 2009
Theory Versus Practice
In theory, the first day of Spring was March 21 in the Northern Hemisphere. In practice, however, the Denver area today is seeing more winter-like weather than we saw all of this past winter. The photograph below shows what it looks like in the Denver area as I write this. As I was out shoveling snow from the driveway and sidewalk for the third time, the pain reminded me of times in software development when "in theory" ended up costing me during the transition to "in practice."
It seems that experienced developers are generally better at understanding the difference between theory and actual practice. Certainly, this has been true in my case, as I have learned (often the hard way) that practice is seldom as easy as theory. I somewhat sheepishly recall cases where I was certain that some action was trivial, but the effort ended up being substantial as unexpected complications arose.
With hindsight, it is often easy to look back and see that we should have realized and even expected some of the troubles that we did not think about when we were talking about "in theory." The Denver area sees blizzards like today's almost every March, so for the experienced Denver-area resident, there is little surprise about this springtime blizzard. Likewise, experienced software developers are more likely to have seen the wrinkles and regular obstacles that make "in practice" more difficult than "in theory."
There are many specific software development examples of "in practice" being more difficult than "in theory." One example is moving from one Java EE application server to another. In theory, this is seamless because these servers comply with the Java EE specifications. However, use of server-specific extensions often makes this transition more difficult in practice.
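As a hedged sketch of how this plays out: an application that configures behavior only through the standard deployment descriptor stays portable, while one that also relies on a vendor descriptor (WebLogic's weblogic.xml is one real example) must re-express that configuration in the target server's own format when migrating. The element values below are illustrative, not taken from any particular application.

```xml
<!-- Portable, standard descriptor: web.xml (any Java EE server reads this) -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
  <session-config>
    <!-- session timeout in minutes, per the Servlet specification -->
    <session-timeout>30</session-timeout>
  </session-config>
</web-app>

<!-- Vendor-specific descriptor: weblogic.xml (WebLogic only) -->
<!-- Anything configured here -- container tuning, work managers, etc. --
     has no meaning to GlassFish, JBoss, or other servers and must be
     translated to their equivalents (e.g. sun-web.xml, jboss-web.xml). -->
<weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/90">
  <session-descriptor>
    <!-- same concept as above, but expressed in seconds, WebLogic-style -->
    <timeout-secs>1800</timeout-secs>
  </session-descriptor>
</weblogic-web-app>
```

The more an application leans on the second kind of file, the further "in practice" drifts from the "in theory" promise of seamless portability.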
Another example of "in practice" being more difficult than the anticipated "in theory" is the effect of underlying COTS products. An otherwise trivial (in theory) upgrade in the version of Java used for an application can turn out to be hairy if underlying COTS products are not compatible with the upgrade. In the days of transitioning from Java 1.3 to Java 1.4, for instance, a common problem that was not always realized or considered when planning the migration was the change in Java's treatment of the unnamed package. Even when developers knew their own code used only named packages for their own classes, they were sometimes surprised when the COTS products they used contained or generated classes in the unnamed package. Their own classes in a named package could not access the COTS product's classes in the unnamed package.
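A minimal sketch of that restriction (the class names here are hypothetical, standing in for whatever a COTS tool might generate): suppose a vendor tool emits a class with no package statement, i.e. in the unnamed package, such as `public class GeneratedHelper { public static String version() { return "1.0"; } }`. Since Java 1.4, code in a named package cannot import or reference that type at all:

```java
package com.example.app;

public class Client {
    public static void main(String[] args) {
        // import GeneratedHelper;     // illegal since Java 1.4: a type in the
        // GeneratedHelper.version();  // unnamed package cannot be imported or
        //                             // referenced from code in a named package
        System.out.println("Named-package code cannot reference unnamed-package types.");
    }
}
```

Before 1.4 the import above compiled, so applications that quietly depended on unnamed-package classes from a COTS jar broke at compile time during the JDK upgrade, which is exactly the kind of surprise that separates "in practice" from "in theory."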
Perhaps the most frequent example of "in theory" not transitioning well to "in practice" occurs when large applications are developed in a language, framework, or tool based on success (or reading about success) with that same product on a very small Hello World-style example application. Often, these tools, languages, and frameworks don't scale particularly well in terms of ease of use as the application's needs grow.
As a final example, even (perhaps especially) concepts often don't scale particularly well from "in theory" to "in practice." Although I do think I'm getting better at more quickly understanding whether the latest software development fad really has any substance to it, it is certainly true that I have been burned many times in my software development career by adopting a technology, language, methodology, or other approach that seemed really good "in theory" or "on paper," but turned out much worse or even mildly disastrous when actually put into practice.
Making decisions based on "in theory" has cost me significant time and effort. This is especially painful when someone else makes the decision based on theory despite my better judgment about likely issues that will make "in practice" much more difficult and expensive. Because of this, I try to remember that "in theory" is almost always easier and less expensive than "in practice" and therefore add time and money to the estimates for an "in theory" solution when comparing alternative solutions. The more I am familiar with a solution, the less sizable this additional buffer needs to be. This is because there is significantly less "in theory" involved when I have worked with and used the approach "in practice" already.
I don't know if it implies that we're all a bunch of optimists, but I have definitely seen repeatedly that we as software developers tend to underestimate how long it will take to accomplish certain tasks. This is especially true when we make decisions and estimates based on "theory" rather than on actual "practice."
Now I need to go back out and shovel again. The theory that it is now Spring seems so much nicer than what I am dealing with in practice.
Sunday, March 22, 2009
Are You The Best Developer You Know?
Are you the best software developer you know? If you are, you might want to consider changing the situation.
I have found my most satisfying jobs to be those in which I work with people who I learn from on a daily basis. Although learning can be done by reading books, blogs, and articles, the best type of learning for me is that done as part of my everyday work. The reasons for this are that I spend a lot of my life on work (40+ hours per week), I learn best by doing, and I learn best when I learn a new technique in conjunction with trying to solve the problem it addresses. I am most likely to be exposed to new ideas, approaches, and perspectives during my daily job if I am working with and around people who are talented and experienced.
It is certainly true that one can (and I often do) learn from developers of all experience levels, but there is no question that the ratio of new things learned to time spent working is much higher when working with talented and experienced developers. One of the reasons it is nice to work with people with significant experience is that no one can know all things. I have found that when I work with highly talented and experienced developers, they are able to fill in gaps in my knowledge and I learn much from them in the areas where I am weakest.
A common belief is that it is best (financially, at least) to purchase the most modest home in a nice neighborhood so that the house's resale value will be higher simply because of the neighborhood. In other words, the perceived value of the neighborhood drags up the perceived value of the modest house. In much the same way, a software developer can learn the most by working with and around more experienced software developers and will, almost without any extra effort, benefit in terms of his or her own skill set and experience.
Another analogy applies here as well. We often hear of individual athletes and even teams of athletes playing to the level of the competition. In other words, a team plays better against an equally good or better team and does not play as well against an inferior team. Likewise, I believe that one can achieve more as a software developer when he or she works with people of equal or greater skills because of a similar effect.
Besides the benefits one gains from working alongside highly experienced and talented developers, there is another reason to hope that I never think of myself as the best software developer I know. There is, of course, an issue of arrogance and overconfidence and an inability to learn when one thinks he or she already knows everything, but it also could be a symptom of the anosognosia-like incompetence described in Kruger and Dunning's 1999 classic Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments. I have found that the more I learn, the more I realize how much more I still have to learn.
I love being a software developer most when I am learning new things. This is a trait I have observed in many software developers, and it partially explains our tendency to succumb to (mis)behavior motivations like resume-driven development, the Magpie Effect, and other borderline dysfunctional behavior motivators. The desire to learn new things can often be satisfied without resorting to the negative results of resume-driven development and the like by simply doing what is best for our customers while working with people from whom we can learn.
It is not easy admitting that we may not be as bright or as experienced (at least in a particular area) as the next developer. One of the downsides of having the modest house in the upscale neighborhood is the envy and keeping up with the Joneses. Just as one must remember the financial reason for having the modest home in the nicer neighborhood to bear these burdens, one can also think about the career benefits of working with and around more experienced developers.
I personally know at least one person (and often more than one, sometimes many more) who knows more than I do in just about any area I can think of. It is not the same person in all cases, but there is almost always someone I look up to and wish to learn more from in any given subject or topic.
When you add more famous authors, bloggers, and others I don't know on a personal level to the mix, there are even more people for me to learn from. However, as I stated earlier, I find I learn most efficiently from working with people rather than just reading what others have written. I still learn from the latter, but I don't learn as comprehensively from reading as I do from doing.
I am a software developer who believes strongly in the concept of software craftsmanship (see also Manifesto for Software Craftsmanship). To transition from an apprentice to a craftsman requires years of hard-earned experience along with guidance from those who have already achieved craftsmanship. However, even when one thinks he or she has reached the status of craftsman, I believe one can still learn much from fellow craftsmen about improving his or her craft. I hope to continue improving my software development skills and craft, but I also hope that I never get to the point where I think I have nothing to learn from others.
I have found my most satisfying jobs to be those in which I work with people who I learn from on a daily basis. Although learning can be done by reading books, blogs, and articles, the best type of learning for me is that done as part of my everyday work. The reasons for this are that I spend a lot of my life on work (40+ hours per week), I learn best by doing, and I learn best when I learn a new technique in conjunction with trying to solve the problem it addresses. I am most likely to be exposed to new ideas, approaches, and perspectives during my daily job if I am working with and around people who are talented and experienced.
It is certainly true that one can (and I often do) learn from developers of all experience levels, but there is no question that the ratio of new things learned to time spent working is much higher when working with talented and experienced developers. One of the reasons it is nice to work with people with significant experience is that no one can know all things. I have found that when I work with highly talented and experienced developers, they are able to fill in gaps in my knowledge and I learn much from them in the areas where I am weakest.
A common belief is that it is best (financially at least) to purchase the most modest home in a nice neighborhood so that the house's resale value will be higher simply because it is in such a nice neighborhood. In other words, the perceived value of the neighborhood drags up the perceived value of the modest house. In many ways, a software developer can learn most by working with and around more experienced software developers and will, almost without any extra effort, be benefited in terms of his or her own skill set and experience.
Another analogy applies here as well. We often hear of individual athletes and even teams of athletes playing to the level of the competition. In other words, a team plays better against an equally good or better team and does not play as well against an inferior team. Likewise, I believe that one can achieve more as a software developer when he or she works with people of equal or greater skills because of a similar effect.
Besides the benefits one gains from working alongside highly experienced and talented developers, there is another reason to hope that I never think of myself as the best software developer I know. There is, of course, an issue of arrogance and overconfidence and an inability to learn when one thinks he or she already knows everything, but it also could be a symptom of anosognosia-like incompetence described in Kruger's and Dunning's 1999 classic Unskilled and Unaware of It: How Difficulties in Recognizing Own's Own Incompetence Lead to Inflated Self-Assessments. I have found that the more I learn, the more I realize how much more I still have to learn.
I love being a software developer most when I am learning new things. This is a trait I have observed in many software developers which partially explains our tendency to succumb to (mis)behavior motivations like resume-driven development, the Magpie Effect, and other borderline dysfunctional behavior motivators (see here also). The desire to learn new things can often be best satisfied without resorting to the negative results of the resume-driven development and the like by simply doing what is best for our customers while working with people from whom we can learn.
It is not easy admitting that we may not be as bright or as experienced (at least in a particular area) as the next developer. One of the downsides of having the modest house in the upscale neighborhood is the envy and the pressure of keeping up with the Joneses. Just as one must remember the financial reason for having the modest home in the nicer neighborhood to bear these burdens, one can also think about the career benefits of working with and around more experienced developers.
I personally know at least one person (and often more than one and sometimes many more than one) who knows more than me in just about any area that I can think of. It is not the same person in all cases, but there almost always is someone who I look up to and wish to learn more from in any given subject or topic.
When you add more famous authors, bloggers, and others I don't know on a personal level to the mix, there are even more people for me to learn from. However, as I stated earlier, I find I learn most efficiently from working with people rather than just reading what others have written. I still learn from the latter, but I don't learn as comprehensively from reading as I do from doing.
I am a software developer who believes strongly in the concept of software craftsmanship (see also Manifesto for Software Craftsmanship). To transition from an apprentice to a craftsman requires years of hard-earned experience along with guidance from those who have already achieved craftsmanship. However, even when one thinks he or she has reached the status of craftsman, I believe one can still learn much from fellow craftsmen about improving his or her craft. I hope to continue improving my software development skills and craft, but I also hope that I never get to the point where I think I have nothing to learn from others.
Wednesday, March 18, 2009
IBM and Sun: Future of GlassFish, NetBeans, and JavaFX
The Wall Street Journal story today about IBM possibly acquiring Sun Microsystems ("IBM in Talks to Buy Sun in Bid to Add to Web Heft") is, at the highest level, nothing too new. It has been rumored for years that Sun would be a likely takeover target, and Java developers have naturally wondered what would happen to Java and Java-related products if that were to happen. What today's story did do, however, is provide details that seem to take this past a rumor to a real story. The article points out that any such transaction might not be completed and faces hurdles such as regulatory approval and shareholder approval.
Most people seem to believe that software, including Java, will have little to no motivating influence on this alleged transaction. The article IBM/Sun deal won't be about the software, experts say discusses what role, if any, Java might play in the transaction. However, for those of us active in the Java development community, the effect of any acquisition on the Java programming language and platform (rather than the motivating power of acquiring Java) is of interest.
The Java ecosystem is large enough that it would likely continue to exist for years to come even if Sun Microsystems is acquired. However, there is no question that Sun exerts considerable leverage on the direction of the Java language and the Java platform, and this direction would certainly be impacted by someone else acquiring Java along with the rest of Sun. IBM has contributed heavily to Javadom in the past with open source contributions like Eclipse, commercial offerings such as the WebSphere application server, heavy participation in the Java Community Process (JCP), and significant contributions to the literature with sites such as DeveloperWorks.
While an acquisition of Sun would most likely lead to subtler changes in the general Java programming language and platform direction, the effect could be much more pronounced and obvious for specific parts of Java and for specific Java-related products. Three products that would be of particular interest to me in such a scenario are NetBeans, GlassFish, and JavaFX. I am also curious about what effect this would have on JavaOne and on Java SE 7.
There are several things I really like about the NetBeans IDE, including its JavaScript and Ruby support, its integration with GlassFish, and its Swing GUI Builder. I also appreciate having another open source and freely available choice for a Java IDE. Given IBM's significant role in bringing about the Eclipse IDE and the sometimes bitter history between Eclipse and NetBeans, one has to wonder what the short-term and long-term fate of NetBeans would be if IBM acquired Sun. While NetBeans is an open source project, it is still heavily influenced and developed by Sun employees. This means that while the NetBeans project might live on, it would be much more difficult for it to continue to thrive and improve as quickly as it has in recent years without the same financial support.
I like to use the GlassFish application server because it provides such early peeks into the latest Java Enterprise Edition features and because it has been relatively straightforward to install and use. With IBM already owning its own commercial application server in WebSphere, there is some question about the level of interest in continuing investment in an open source alternative to their own product. As with NetBeans, the project could be forked, but the same risks of loss of momentum, slowing of support, and slowing of release of new features exist.
Sun has obviously invested significant time, energy, and resources into JavaFX. The topic has dominated the last two JavaOne conferences and is well represented in the catalog for the 2009 JavaOne Conference. The question is whether a company acquiring Sun would have the same, greater, or less interest in JavaFX.
Speaking of JavaOne, an acquisition of Sun would almost certainly have an impact on this annual conference. The 2009 edition would likely change mostly in terms of discussions and unofficial functions, but future versions of the conference would likely see some dramatic shifts in focus. For example, if IBM purchased Sun, IBM's presence at JavaOne would obviously be bigger than it was even in the early years of JavaOne.
Finally, with it sounding like we'll see Java SE 7 in 2010, I cannot help but wonder what effect, if any, a purchase of Sun in 2009 would have on that release. My guess is that it would only have a relatively minor effect on Java SE 7, but could potentially have a much larger effect on future versions of Java. It seems like an acquisition of Sun by IBM would almost certainly impact the Java Community Process in general.
Other Articles on the Possibility of IBM Purchasing Sun Microsystems
Java Crowd Has Mixed Views on Potential Sun-IBM Deal
IBM in Talks to Buy Sun?
Analysis of Potential Acquisition of Sun by IBM
Sun-IBM Merger: Is This Really Happening?
IBM Buying Sun Microsystems Makes No Sense: It's a Red Herring
Monday, March 16, 2009
The Java Collections Class
One of my favorite standard Java classes is the Collections class. This is not surprising considering how often I find myself using the Java Collections Framework. Each Java Collection interface and implementation is useful in its own right, but the Collections class provides some convenience methods that are highly useful in working with Java collections.
The Javadoc API documentation for java.util.Collections explains the basics of this class such as the fact that all of its methods are static and that they all either operate on a provided collection or return a collection (here I am using "collection" more broadly to include Map as opposed to narrowly focusing on collections implementing the Collection interface). There are so many highly useful methods in this class that I am going to only focus on a subset of them to keep what is already a lengthy blog posting from becoming too large.
Empty Collections
Several of the methods provided by java.util.Collections perform similar functionality on different types of collections. For example, the methods Collections.emptySet(), Collections.emptyMap(), and Collections.emptyList() perform the same functionality on Sets, Maps, and Lists respectively. Each returns the appropriate collection type that is empty (it contains no elements), typesafe, and immutable. In other words, the provided collection is empty and nothing can be added to it. As I have blogged about previously, this is useful for implementing the recommendation of Effective Java to return empty collections rather than null.

The following sample code shows how one of these "empty" methods can be used and the image below the code demonstrates the UnsupportedOperationException that is thrown when the code tries to add an element to this immutable empty collection. For this particular example, I am using Collections.emptySet(), but the principle is the same for the List and Map versions.

demonstrateEmptySet()
/**
 * Provide an empty set.
 */
public void demonstrateEmptySet()
{
   log("===== DEMONSTRATING EMPTY SET =====", System.out);
   final Set<String> emptySet = Collections.emptySet();
   log("Size of returned emptySet(): " + emptySet.size(), System.out);
   log("----- Adding String to Collections.emptySet() returned Set -----", System.out);
   emptySet.add("A new String to add.");
}
Results of Running demonstrateEmptySet()
Single-Element Collections
Another functionality provided by Collections for Set, List, and Map is the provision of a single-element collection that, like its empty sibling, is immutable and typesafe. To illustrate this, the next code sample and output screen snapshot demonstrate use of Collections.singletonList(T), though the same principles apply to Collections.singletonMap(K,V) and Collections.singleton(T) (the absence of "Set" in that last method name is not an accidental omission on my part; that method does apply to the Set).

demonstrateSingletonList()
/**
 * Provide a List with a single element.
 */
public void demonstrateSingletonList()
{
   log("===== DEMONSTRATING SINGLETON LIST =====", System.out);
   final List<String> singleElementList =
      Collections.singletonList("A single String to add.");
   log("Size of returned singletonList(): " + singleElementList.size() + NEW_LINE,
       System.out);
   log("----- Adding String to Collections.singletonList() returned List -----",
       System.out);
   singleElementList.add("Another String to add.");
}
Results of Running demonstrateSingletonList()
I have found these various "singleton" methods to be useful for passing a single value to an API that requires a collection of that value. Of course, this works best when the code processing the passed-in value does not need to add to the collection.
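One handy idiom, noted in the Javadoc for Collections.singleton(T) itself, is combining a singleton with removeAll to strip every occurrence of a value from a List. A minimal sketch (the class name and list contents here are my own made-up example):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SingletonRemoveDemo
{
   public static void main(final String[] arguments)
   {
      final List<String> books = new ArrayList<String>(
         Arrays.asList("Dune", "Effective Java", "Dune", "Hamlet"));
      // Remove every occurrence of "Dune" with a single call.
      books.removeAll(Collections.singleton("Dune"));
      System.out.println(books);  // [Effective Java, Hamlet]
   }
}
```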
Unmodifiable Collections
The methods already covered for returning empty collections and single-element collections provided these collections as unmodifiable collections. For situations in which an unmodifiable collection is desired with more than one element, appropriate methods are Collections.unmodifiableList(List), Collections.unmodifiableMap(Map), Collections.unmodifiableSet(Set), and the most general Collections.unmodifiableCollection(Collection). In addition to these, there are also methods for returning a Set or Map that is sorted in addition to being unmodifiable: Collections.unmodifiableSortedMap and Collections.unmodifiableSortedSet.
A source code example of using Collections.unmodifiableMap(Map) and the results of running that example are shown next.

demonstrateUnmodifiableMap()
/**
 * Demonstrate use of Collections.unmodifiableMap(). Also demonstrates
 * how the underlying collection (or Map in this case) can be changed even
 * when set to an unmodifiable version.
 */
public void demonstrateUnmodifiableMap()
{
   log("===== DEMONSTRATING UNMODIFIABLE MAP =====", System.out);
   final Map<MovieGenre, String> unmodifiableMap =
      Collections.unmodifiableMap(this.favoriteGenreMovies);
   log("Map BEFORE MODIFICATION: " + NEW_LINE + unmodifiableMap.toString(),
       System.out);
   log("----- Putting a new value in Map for existing key in underlying Map. -----",
       System.out);
   this.favoriteGenreMovies.put(MovieGenre.JAMES_BOND, "Thunderball");
   log("The Unmodifiable Map AFTER MODIFICATION: " + NEW_LINE
       + unmodifiableMap.toString(), System.out);
   log("----- Putting a completely new key in the underlying map. -----",
       System.out);
   this.favoriteGenreMovies.put(MovieGenre.MYSTERY, "The Usual Suspects");
   log("The Unmodifiable Map AFTER MODIFICATION: " + NEW_LINE
       + unmodifiableMap.toString(), System.out);
   log("----- Now try to 'put' to the unmodifiable wrapper collection -----",
       System.out);
   unmodifiableMap.put(MovieGenre.MYSTERY, "Rear Window");
}
Results of Running demonstrateUnmodifiableMap()
As a brief side note, I intentionally included in this example changes to the Map that underlies the unmodifiable Map to demonstrate that the source collection from which an unmodifiable version is returned can still be changed as needed. It is only the returned collection that is unmodifiable, in the sense that elements cannot be added to it or removed from it.
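When an unmodifiable view that is also independent of later changes to the source is wanted, a common approach is to copy the source before wrapping it. A small sketch of that idea (the class name and map contents are hypothetical, not taken from the example above):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class UnmodifiableSnapshotDemo
{
   public static void main(final String[] arguments)
   {
      final Map<String, String> source = new HashMap<String, String>();
      source.put("genre", "Western");
      // Wrap a copy rather than the live map so that later changes to
      // 'source' are not visible through the unmodifiable view.
      final Map<String, String> snapshot =
         Collections.unmodifiableMap(new HashMap<String, String>(source));
      source.put("genre", "Mystery");
      System.out.println(snapshot.get("genre"));  // Western
   }
}
```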
Checked Collections
All of the methods on the Collections class examined in this posting so far have returned unmodifiable collections, whether empty, single-element, or multi-element. However, the Collections class is capable of much more than simply providing unmodifiable wrappers on collections. The "checked" methods [Collections.checkedCollection(Collection, Class), Collections.checkedList(List, Class), Collections.checkedMap(Map, Class), and Collections.checkedSet(Set, Class)] are useful for dealing with mixes of collections that use generic types and collection handling based on raw collections.
Before the introduction of generics with J2SE 5, we were required to find out about type problems associated with collections as we pulled an item out of a collection and cast it to the expected type at runtime. The advent of J2SE 5 generics enabled us to generally move this type mismatch detection from runtime on extraction of the items from a collection to compile time on insertion into the collection. This is highly advantageous because we can find the problem where it really occurs originally (at insertion) and because we can find it sooner (at compile time rather than at runtime).
Unfortunately, there are ways in which this type checking can be circumvented. For example, if a module out of our control accesses our collection as a raw collection, that module will be able to insert non-compliant items in the collection. Of course, we could also do the same ourselves if we're not careful or have legacy code that did not get fully ported. The Javadoc documentation for the Collections.checkedCollection method explains that these "checked" methods are also useful for debugging problems associated with ClassCastExceptions and generically typed collections by wrapping a collection instantiation in one of these calls.
To illustrate the problem that can occur when generically typed collections and raw collections are mixed, the following code intentionally does that mixture and the resulting problems are documented in the screen snapshot with its output.
demonstrateProblemWithoutCheckedCollection()
/**
 * Demonstrate problems that can occur when Collections.checkedCollection is
 * not used.
 */
private void demonstrateProblemWithoutCheckedCollection()
{
   log("1. Demonstrate problem of no checked collection.", System.out);
   final Integer arbitraryInteger = new Integer(4);
   final List rawList = this.favoriteBooks;
   rawList.add(arbitraryInteger);
   final List<String> stringList = rawList;
   for (final String element : stringList)
   {
      log(element.toUpperCase(), System.out);
   }
}
Results of demonstrateProblemWithoutCheckedCollection
This problem may look easy to address, but it is much more challenging if the mixed use of raw collections with generically typed collections is separated by many lines of code, different methods, or even different classes. In fact, the error upon access of the non-compliant collection element could happen well after its insertion in terms of both time passed and number of lines of code executed.
The use of the "checked" collection method brings some of the advantages of generically typed collections back even when raw collections are mixed. The following code sample and the screen snapshot of the output it generates are shown next.
demonstrateProblemFixedWithCheckedCollection()
/**
 * Demonstrate how Collections.checkedCollection helps the problem.
 */
private void demonstrateProblemFixedWithCheckedCollection()
{
   log("2. Demonstrate problem fixed with checked collection", System.out);
   final Integer arbitraryInteger = new Integer(4);
   final List<String> checkedList =
      Collections.checkedList(this.favoriteBooks, String.class);
   final List rawList = checkedList;
   rawList.add(arbitraryInteger);
   final List<String> stringList = rawList;
   for (final String element : stringList)
   {
      log(element.toUpperCase(), System.out);
   }
}
Results of demonstrateProblemFixedWithCheckedCollection()
Although use of the "checked" method here still results in the error being detected at runtime, it provides the advantage of detecting the error where it originally occurs (insertion) rather than some unknown amount of time later.
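The debugging idiom mentioned in the Collections.checkedCollection Javadoc is to apply the checked wrapper at the point the collection is instantiated, so that any bad insertion fails immediately, even through a raw reference. A hedged sketch of that idiom (the class name and data here are my own invention):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CheckedAtCreationDemo
{
   public static void main(final String[] arguments)
   {
      // Wrap at instantiation so every insertion is type checked.
      final List<String> names =
         Collections.checkedList(new ArrayList<String>(), String.class);
      names.add("valid");
      final List rawNames = names;  // raw view, as legacy code might hold
      try
      {
         rawNames.add(Integer.valueOf(42));  // rejected here, at insertion
      }
      catch (ClassCastException expected)
      {
         System.out.println("Rejected at insertion: " + expected.getMessage());
      }
      System.out.println(names);  // [valid]
   }
}
```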
Enumerations and Collections
The Enumeration interface has been available since JDK 1.0. Because it is used with several key legacy APIs, it can be useful to be able to easily convert back and forth between an Enumeration and a collection. Two methods support this conversion: Collections.list(Enumeration) converts the provided Enumeration into a List, and Collections.enumeration(Collection) provides an Enumeration over a Collection.

The following source code example demonstrates the conversion of an Enumeration to a List and the screen snapshot after it demonstrates its output.
demonstrateEnumerationToList()
/**
 * Demonstrate use of Collections.list(Enumeration).
 */
public void demonstrateEnumerationToList()
{
   log("===== Demonstrate Collections.list(Enumeration) =====", System.out);
   final Enumeration properties = System.getProperties().propertyNames();
   final List propertiesList = Collections.list(properties);
   log(propertiesList.toString(), System.out);
}
Results of demonstrateEnumerationToList()
The next code listing and its resulting screen snapshot demonstrate converting a Collection to an Enumeration.
demonstrateCollectionToEnumeration()
/**
 * Demonstrate use of Collections.enumeration(Collection).
 */
public void demonstrateCollectionToEnumeration()
{
   log("===== Demonstrate Collections.enumeration(Collection) =====", System.out);
   final Enumeration books = Collections.enumeration(this.favoriteBooks);
   while (books.hasMoreElements())
   {
      log(books.nextElement().toString(), System.out);
   }
}
Results of demonstrateCollectionToEnumeration()
List Order Change-ups
The Collections class supports randomly reordering a List [two versions of the Collections.shuffle method], reversing the order of a List [the Collections.reverse(List) method], and rotating entries in a List by a prescribed number of positions [the Collections.rotate(List, int) method]. Code samples and the associated screen snapshots for each of these follow.

The "shuffle" method is used to randomly reorder items in a List.
demonstrateShuffle()
/**
 * Demonstrate Collections.shuffle(List).
 */
public void demonstrateShuffle()
{
   log("===== Demonstrate Collections.shuffle(List) =====", System.out);
   log("Books BEFORE shuffle: " + NEW_LINE + this.favoriteBooks, System.out);
   Collections.shuffle(this.favoriteBooks);
   log("Books AFTER shuffle: " + NEW_LINE + this.favoriteBooks, System.out);
}
Results of demonstrateShuffle()
The "reverse" method simply reverses the order of items in a List.
demonstrateReverseList()
/**
 * Demonstrate use of Collections.reverse(List).
 */
public void demonstrateReverseList()
{
   log("===== Demonstrate Collections.reverse(List) =====", System.out);
   log("List BEFORE reverse:" + NEW_LINE + this.favoriteBooks, System.out);
   Collections.reverse(this.favoriteBooks);
   log("List AFTER reverse:" + NEW_LINE + this.favoriteBooks, System.out);
}
Results of demonstrateReverseList()
The "rotate" method rotates elements in a List by the provided number of spots.
demonstrateRotate()
/**
 * Demonstrate Collections.rotate(List, int).
 */
public void demonstrateRotate()
{
   log("===== Demonstrate Collections.rotate(List, int) =====", System.out);
   log("Books BEFORE rotation: " + NEW_LINE + this.favoriteBooks, System.out);
   Collections.rotate(this.favoriteBooks, 3);
   log("Books AFTER rotation: " + NEW_LINE + this.favoriteBooks, System.out);
}
Results of demonstrateRotate()
So Many More
The Collections class provides significantly more functionality than even that shown here. It includes methods that support collection wrappers for use in concurrent environments [such as Collections.synchronizedCollection(Collection)], support filling a List with a particular item (Collections.fill), support counting the number of times a particular object exists in a Collection (Collections.frequency), and support sorting, searching, swapping, and several more types of functionality.
Conclusion
The Collections class is one of the most valuable classes in the Java SDK. This blog posting has attempted to demonstrate some of its highly useful methods, but there are many more in addition to those shown here. Use of the Collections class not only makes working with Java collections easier, but it also provides support for best practices related to Java collection use.
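As a standalone sketch of a few of the additional methods mentioned above (Collections.frequency, Collections.swap, Collections.fill, and Collections.synchronizedList) — the class name and sample data here are my own, not from the original post:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MoreCollectionsDemo
{
   public static void main(final String[] arguments)
   {
      final List<String> letters =
         new ArrayList<String>(Arrays.asList("a", "b", "c", "b"));

      // Collections.frequency counts occurrences of an object in a Collection.
      System.out.println(Collections.frequency(letters, "b"));  // 2

      // Collections.swap exchanges the elements at two indices.
      Collections.swap(letters, 0, 2);
      System.out.println(letters);  // [c, b, a, b]

      // Collections.fill overwrites every element with the provided item.
      Collections.fill(letters, "z");
      System.out.println(letters);  // [z, z, z, z]

      // Collections.synchronizedList wraps a List for use by multiple
      // threads (iteration still needs external synchronization).
      final List<String> syncView = Collections.synchronizedList(letters);
      System.out.println(syncView.size());  // 4
   }
}
```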
Saturday, March 14, 2009
Running Individual JUnit Unit Tests from Command-line Using NetBeans build.xml File
The NetBeans IDE provides JUnit integration that can be very handy when writing and running JUnit-based unit tests. However, I like to be able to do anything I might do often outside of the IDE as well as from within the IDE. In particular, there are times when I want to do things from the command-line without the need to open up the IDE.
It can be very useful to execute a single JUnit-based unit test class rather than executing the entire unit test suite. This is easy to do in the IDE itself, but it is something I also want to do from the command-line. NetBeans supports executing a single unit test class from the command-line, but I have found that you need to be aware of a few tricks to do this. This blog posting covers the minor things one needs to know to run individual JUnit-based unit tests from the command-line using an Ant build.xml file generated by NetBeans 6.1 or NetBeans 6.5 for a standard Java Application project.
When using the NetBeans New Project wizard to create a Java Application project, one gets an Ant-compliant build.xml file as the main project build file, but most of the real work is delegated to the build-impl.xml file that is generated and placed in the nbproject subdirectory of the main project working directory.
Before demonstrating how to run an individual test class from the command line, I'll first look at how to do it through the IDE. For the IDE and command-line examples, I will be using two classes to be tested and two test classes that will test those classes. The directory structure for this project with the test classes and the classes to be tested is shown next in this screen snapshot of the NetBeans "Projects" window.
The image above shows that this NetBeans project (which was created previously as a Java Application project using the NetBeans New Project creation wizard) is called "IndividualTesting." More importantly, this image shows the two main source classes (Adder and Multiplier in the "Source Packages" area) and their respective test classes (AdderTest and MultiplierTest in the "Test Packages" area). The image also shows that Java SE 6 and JUnit 4.5 are being used.
The source code for the classes (source and test) displayed above is shown next.
Adder.java (Class to be tested)
package dustin;

/**
 * Simple class to be tested that by coincidence performs addition functionality.
 *
 * @author Dustin
 */
public class Adder
{
   /** No-arguments constructor. */
   public Adder() {}

   /**
    * Sum the provided integers.
    *
    * @param augend First integer to be added.
    * @param addend Second integer to be added.
    * @param addends Remaining integers, if any, to be added.
    * @return Sum of provided integers.
    */
   public int add(final int augend, final int addend, final int ... addends)
   {
      int sum = augend + addend;
      for (final int individualAddend : addends)
      {
         sum += individualAddend;
      }
      return sum;
   }
}
Multiplier.java (Class to be tested)
package dustin;

/**
 * Simple class to be tested that by coincidence performs multiplication functionality.
 *
 * @author Dustin
 */
public class Multiplier
{
   /**
    * Multiply the provided factors.
    *
    * @param factor1 First factor to be multiplied.
    * @param factor2 Second factor to be multiplied.
    * @param factors Remaining factors to be multiplied.
    * @return Product of factors multiplication.
    */
   public int multiply(final int factor1, final int factor2, final int ... factors)
   {
      int product = factor1 * factor2;
      for (final int individualFactor : factors)
      {
         product *= individualFactor;
      }
      return product;
   }
}
AdderTest.java (Class to test Adder)
package dustin;

import org.junit.Assert;
import org.junit.Test;

/**
 * Test for class dustin.Adder.
 *
 * @author Dustin
 */
public class AdderTest extends Adder
{
   public AdderTest() {}

   @Test
   public void testAddWithTwoAddends()
   {
      final int expectedSum = 7;
      final int resultSum = add(3,4);
      Assert.assertEquals(
         "Sum of two added integers does not match expected result.",
         expectedSum, resultSum);
   }

   @Test
   public void testAddWithThreeAddends()
   {
      final int expectedSum = 11;
      final int resultSum = add(3,4,4);
      Assert.assertEquals(
         "Sum of three added integers does not match expected result.",
         expectedSum, resultSum);
   }

   @Test
   public void testAddWithFourAddends()
   {
      final int expectedSum = 14;
      final int resultSum = add(3,4,4,3);
      Assert.assertEquals(
         "Sum of four added integers does not match expected result.",
         expectedSum, resultSum);
   }

   @Test
   public void testAddWithTwoNegativeNumbers()
   {
      final int expectedSum = -10;
      final int resultSum = add(-6,-4);
      Assert.assertEquals(
         "Sum of two negative integers does not match expected result.",
         expectedSum, resultSum);
   }

   @Test
   public void testWithIntentionalError()
   {
      final int expectedSum = 27;
      final int resultSum = add(9,3);
      Assert.assertEquals(
         "The two provided numbers do not add to what was expected.",
         expectedSum, resultSum);
   }
}
MultiplierTest.java (Class to test Multiplier)
package dustin;

import org.junit.Assert;
import org.junit.Test;

/**
 * Test for class dustin.Multiplier.
 *
 * @author Dustin
 */
public class MultiplierTest extends Multiplier
{
   /** No-arguments constructor. */
   public MultiplierTest() {}

   @Test
   public void testMultiplyTwoIntegers()
   {
      final int expectedProduct = 15;
      final int resultProduct = multiply(3,5);
      Assert.assertEquals(
         "Product of multiplication of two integers does not match.",
         expectedProduct, resultProduct);
   }

   @Test
   public void testMultiplyTwoNegativeIntegers()
   {
      final int expectedProduct = 20;
      final int resultProduct = multiply(-4,-5);
      Assert.assertEquals(
         "Product of multiplication of two negative integers does not match.",
         expectedProduct, resultProduct);
   }

   @Test
   public void testMultiplyTwoMixedSignIntegers()
   {
      final int expectedProduct = -12;
      final int resultProduct = multiply(-3,4);
      Assert.assertEquals(
         "Product of multiplication of two integers of mixed sign does not match.",
         expectedProduct, resultProduct);
   }
}
With the source code written, the JUnit-based unit tests written, and the NetBeans Java Application project set up as shown above, it is almost trivial to run the unit tests. All one needs to do is use ALT+F6 or select Run->Test Project to run all the tests for that particular NetBeans project. The next two screen snapshots show how to run all of the tests and what the results look like.
Running All Project's Unit Tests
Results of All Project's Unit Tests Displayed in NetBeans
We can see that most of the tests passed, though the test that was intentionally rigged to fail (to serve our illustration needs) did lead to a report of a single test failure. We may have reason at this point to only run a single test. For example, we might want to fix a given test and only run it rather than run all the tests again. This would be an advantage in a more realistic situation where we have many more unit test classes than the two shown here and don't want to run all of them all of the time.
From NetBeans, we can run an individual test by right-clicking on that particular test result's output and selecting "Run Again." This is demonstrated for one of the successful tests and for the failed test in the next two screen snapshots.
Running Individual Test (Successful Test)
Running Individual Test (Failed Test)
When the two individual tests are re-run individually, their respective output is shown in the next two screen snapshots.
Results of Re-running Successful Individual Test
Results of Re-running Failed Individual Test
As has been demonstrated so far, it is really easy to run JUnit-based unit tests as a group or individually against the desired test method. As stated earlier, there are times when this behavior is desired from the command-line. Running the entire suite of tests is easy and is done by simply invoking the targets compile-test and test to compile the unit tests and run the unit tests, respectively. The default targets for an Ant-based build of a NetBeans project are shown in the next screen snapshot.
When one wishes to run a unit test individually from the command-line, there are a few additional details to know. From looking at the build-impl.xml file generated by NetBeans (or by looking at the listed targets in the screen snapshot above), it is evident that one can invoke the test-single target to run an individual test. When one tries to do this with a simple ant test-single, the following output is seen.
From the end of this output (that portion displayed in the above screen snapshot: "Must select some files in the IDE or set javac.includes"), it is clear that when this particular target is not executed within the NetBeans IDE, it requires the property javac.includes to be set.
An easy method to provide the javac.includes property is to pass it as a name/value pair using the -D argument passed to the ant command. For example, to run AdderTest (the test class containing "testWithIntentionalError"), we can provide the property like this:
ant -Djavac.includes=dustin/AdderTest.java test-single
Executing the above line sets the javac.includes property to the source file of the test class containing the individual test to be executed.
With the javac.includes property specified, we see a different result as demonstrated in the next screen snapshot. The message is another error message and is again pretty clear: "Must select some files in the IDE or set test.includes".
The same value can be specified for the test.includes property as was specified for the javac.includes property. In this case, because we want to re-run "testWithIntentionalError" in AdderTest, we would use the following command:
ant -Djavac.includes=dustin/AdderTest.java -Dtest.includes=dustin/AdderTest.java test-single
The following screen snapshot displays the end of the properly executed individual test run.
From this screen snapshot, we see the results only for the AdderTest class (test suite) we specified in the javac.includes and test.includes properties. This can be much quicker than waiting for all the test suites to run if we have a large number of them.
As with all output of NetBeans-enabled JUnit test runs, significantly more output is available in the project's build/test/results directory in a file named TEST-dustin.AdderTest.xml (XML file named after the package and test class).
The generated XML file holding the results of the JUnit-based unit tests consists of a structure that looks something like this (I added the XML comment explaining that there are typically far more properties specified in these files, but they were left out here for brevity and clarity):
<testsuite errors="0" failures="1" hostname="MARX-PC" name="dustin.AdderTest" tests="5" time="0.109" timestamp="2009-03-15T00:24:06">
  <properties>
    <!-- Only a select sample of property settings are shown here. All of the
         properties associated with the NetBeans project are declared as
         name/value property pairs here in the same way that the select property
         values below are declared. -->
    <property name="ant.file.IndividualTesting-impl" value="C:\java\examples\IndividualTesting\nbproject\build-impl.xml" />
    <property name="libs.proguard.javadoc" value="" />
    <property name="ant.library.dir" value="C:\apache-ant-1.7.0-bin\apache-ant-1.7.0\lib" />
    <property name="libs.Spring-2-5-6.src" value="" />
    <property name="libs.junit.javadoc" value="C:\Program Files\NetBeans 6.5\java2\docs\junit-3.8.2-api.zip" />
    <property name="javac.includes" value="dustin/AdderTest.java" />
    <property name="user.name" value="Dustin" />
  </properties>
  <testcase classname="dustin.AdderTest" name="testAddWithTwoAddends" time="0.016" />
  <testcase classname="dustin.AdderTest" name="testAddWithThreeAddends" time="0.0" />
  <testcase classname="dustin.AdderTest" name="testAddWithFourAddends" time="0.0" />
  <testcase classname="dustin.AdderTest" name="testAddWithTwoNegativeNumbers" time="0.0" />
  <testcase classname="dustin.AdderTest" name="testWithIntentionalError" time="0.0">
    <failure message="The two provided numbers do not add to what was expected. expected:&lt;27&gt; but was:&lt;12&gt;" type="junit.framework.AssertionFailedError">junit.framework.AssertionFailedError: The two provided numbers do not add to what was expected. expected:&lt;27&gt; but was:&lt;12&gt;
      at dustin.AdderTest.testWithIntentionalError(AdderTest.java:60)
    </failure>
  </testcase>
  <system-out><![CDATA[]]></system-out>
  <system-err><![CDATA[]]></system-err>
</testsuite>
As the XML sample above indicates, the full result information is available in this file. Because it is well-formed XML, there are many tools and approaches one could use to view this data. The XML data can be viewed directly in a text editor, viewed in an XML tool that will color code it and indent it appropriately, translated with XSLT to another format, processed with Java XML parsing approaches such as JAXB, or viewed/processed with many other approaches.
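As one illustration of the Java XML parsing option, here is a small sketch that reads such a results file with the JDK's built-in DOM parser and prints a summary; the class name is my own and the file path assumes the build/test/results layout described above:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class JUnitResultsSummary
{
   public static void main(final String[] arguments) throws Exception
   {
      // Hypothetical path; matches the NetBeans project layout described above.
      final File resultsFile =
         new File("build/test/results/TEST-dustin.AdderTest.xml");
      final Document document =
         DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(resultsFile);

      // The root <testsuite> element carries the overall counts as attributes.
      final Element suite = document.getDocumentElement();
      System.out.println(
           "Suite " + suite.getAttribute("name")
         + ": " + suite.getAttribute("tests") + " tests, "
         + suite.getAttribute("failures") + " failures, "
         + suite.getAttribute("errors") + " errors");

      // List each test case and whether it reported a nested failure element.
      final NodeList testCases = suite.getElementsByTagName("testcase");
      for (int index = 0; index < testCases.getLength(); index++)
      {
         final Element testCase = (Element) testCases.item(index);
         final boolean failed =
            testCase.getElementsByTagName("failure").getLength() > 0;
         System.out.println(
            "   " + testCase.getAttribute("name") + (failed ? " FAILED" : " passed"));
      }
   }
}
```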
Because we are using Ant and JUnit, the easiest method for viewing the test results is to take advantage of the optional Ant junitreport task. This is easily added to the build.xml file as a new target ("create-unit-test-report") as shown next:
<target name="create-unit-test-report"
description="Generate reports for executed JUnit unit tests.">
<mkdir dir="report" />
<junitreport todir="./report">
<fileset dir="./build/test/results">
<include name="TEST-*.xml"/>
</fileset>
<report format="frames" todir="./report/html"/>
</junitreport>
</target>
When this target is run, output similar to that shown in the next screen snapshot is seen.
In this case, the HTML generated via XSLT transformation of the unit test XML output is available under the project's newly created report/html directory. When the index.html file in that directory is brought up in a web browser, it appears as shown in the two images that follow.
The Properties link in the bottom right corner of the web page shown in the last image can be clicked on to see the lengthy list of properties used in the NetBeans project in a more user-friendly format.
Conclusion
NetBeans makes it easy to run all unit tests in a project or to run specific JUnit-based unit test suites and tests individually. It is not much more difficult to run individual unit test suites from the command line as long as the javac.includes and test.includes properties are specified when Ant is used to run the test-single target. More aesthetically pleasing output can be obtained by using Ant's junitreport task to translate the XML output into a more desirable format such as the default HTML representation. For additional details on using NetBeans with JUnit, see Writing JUnit Tests in NetBeans IDE.
Friday, March 13, 2009
Java String Literals: No String Constructor Required
I think that most experienced Java developers are aware of many of the characteristics of the Java String that make it a little different from other objects. One particular nuance of the Java String that people new to Java sometimes don't fully appreciate is that a literal String is already a String object.
When first learning Java it is really easy to write a String assignment like this:
// Unnecessary and redundant String instantiation
String blogUrlString = new String("http://marxsoftware.blogspot.com/");
This will compile and the initialized String blogUrlString will support any needs one might expect from a String. However, the downside of this particular statement is there are actually two String instantiations in this case and one of them is unnecessary. Because the String literal "http://marxsoftware.blogspot.com/" is already a full-fledged Java String, the new operator is unnecessary and results in an extraneous instantiation. The code above can be re-written as follows:
// The 'new' keyword is not needed because the literal String is a full String object
String blogUrlString = "http://marxsoftware.blogspot.com/";
The unnecessary String instantiation demonstrated first will lead to reduced performance in Java applications. If the extraneous instantiation occurs in limited cases outside of loops, it is likely not to be a significant performance degradation. However, if it occurs within a loop, its performance impact can be much more significant. Even when the performance issue is only slight, I still find the extra "new" instantiation to be less readable than the second method shown above.
Joshua Bloch uses an example similar to mine above to illustrate Item 5 ("Avoid Creating Unnecessary Objects") in the Second Edition of Effective Java. He points out that this extra instantiation in frequently called code can lead to performance problems.
To demonstrate the effect of this unnecessary extra instantiation of a String, I put together the following simple class (with a nested member class and a nested enum). The full code for it appears next.
RedundantStringExample.java
package dustin.examples;
import java.util.ArrayList;
import java.util.List;
/**
* Example demonstrating effect of redundant String instantiation.
*/
public class RedundantStringExample
{
/** Operating System-independent New line character. */
private static final String NEW_LINE = System.getProperty("line.separator");
/** List of Strings. */
private List<String> strings = new ArrayList<String>();
/** No-arguments constructor. */
public RedundantStringExample() {}
/**
* Test performance in loop over single String instantiation that is
* executed the number of times as provided by the passed-in argument.
*
* @param numberOfLoops Number of times to instantiate Single String.
* @return Results of this test.
*/
public TestResult testSingleString(final int numberOfLoops)
{
final TestResult result = new TestResult(numberOfLoops, TestType.SINGLE);
result.startTimer();
for (int counter = 0; counter < numberOfLoops; counter++)
{
strings.add("http://marxsoftware.blogspot.com/");
}
result.stopTimer();
return result;
}
/**
* Test performance in loop over redundant String instantiations that is
* executed the number of times as provided by the passed-in argument.
*
* @param numberOfLoops Number of times to instantiate Single String.
* @return Results of this test.
*/
public TestResult testRedundantStrings(final int numberOfLoops)
{
final TestResult result = new TestResult(numberOfLoops, TestType.REDUNDANT);
result.startTimer();
for (int counter = 0; counter < numberOfLoops; counter++)
{
strings.add(new String("http://marxsoftware.blogspot.com/"));
}
result.stopTimer();
return result;
}
/**
* Run the examples based on provided command-line arguments.
*
* @param arguments Command-line arguments where the first argument should
* be an integer (not decimal) numeral.
*/
public static void main(final String[] arguments)
{
final int numberArguments = arguments.length;
if (numberArguments < 2)
{
System.err.println("Please provide two command-line arguments:");
System.err.println("\tIntegral number of times to instantiate Strings");
System.err.println("\tType of test to run ('redundant' or 'single')");
System.exit(-2);
}
final int numberOfExecutions = Integer.valueOf(arguments[0]);
final String testChoice = arguments[1];
if (testChoice == null || testChoice.isEmpty())
{
System.err.println("The second argument must be a test choice.");
System.exit(-1);
}
final RedundantStringExample me = new RedundantStringExample();
TestResult testResult = null;
if (testChoice.equalsIgnoreCase("redundant"))
{
testResult = me.testRedundantStrings(numberOfExecutions);
}
else // testChoice is "single" or something unexpected
{
testResult = me.testSingleString(numberOfExecutions);
}
System.out.println(testResult);
}
/**
* Class used to pass test results back to caller.
*/
private static class TestResult
{
/** Number of milliseconds per second. */
private static final long MILLISECONDS_PER_SECOND = 1000;
/** Number of String instantiations. */
private int numberOfExecutions;
/** Type of test this result applies to. */
private TestType testType;
/** Test beginning time. */
private long startTime = -1L;
/** Test ending time. */
private long finishTime = -1L;
/**
* Constructor accepts argument indicating number of times applicable
* test should be run.
*
* @param newNumberOfExecutions Times test whose result this is will be/was
* executed.
* @param newTestType Type of test executed for this result
*/
public TestResult(final int newNumberOfExecutions, final TestType newTestType)
{
numberOfExecutions = newNumberOfExecutions;
testType = newTestType;
}
/**
* Start timer.
*/
public void startTimer()
{
startTime = System.currentTimeMillis();
}
/**
* Stop timer.
*
* @throws IllegalStateException Thrown if this stopTimer() method is
* called and the corresponding startTimer() method was never called or
* if the calculated finish time is earlier than the start time.
*/
public void stopTimer()
{
if (startTime < 0 )
{
throw new IllegalStateException(
"Cannot stop timer because it was never started!");
}
finishTime = System.currentTimeMillis();
if (finishTime < startTime)
{
throw new IllegalStateException(
"Cannot have a stop time [" + finishTime + "] that is less than "
+ "the start time [" + startTime + "]");
}
}
/**
* Provide the number of milliseconds spent in execution of test.
*
* @return Number of milliseconds spent in execution of test.
* @throws IllegalStateException Thrown if the time spent is invalid
* due to the finish time being less than (earlier than) the start time.
*/
public long getMillisecondsSpent()
{
if (finishTime < startTime)
{
throw new IllegalStateException(
"The time spent is invalid because the finish time ["
+ finishTime + "] is earlier than the start time ["
+ startTime + "].");
}
return finishTime - startTime;
}
/**
* Provide the number of seconds spent in execution of test.
*
* @return Number of seconds spent in execution of test.
*/
public double getSecondsSpent()
{
return getMillisecondsSpent() / (double) MILLISECONDS_PER_SECOND;
}
/**
* Provide the number of executions run as part of this test.
*
* @return Number of executions of this test.
*/
public int getNumberOfExecutions()
{
return numberOfExecutions;
}
/**
* Provide the type of this test.
*
* @return Type of this test.
*/
public TestType getTestType()
{
return testType;
}
/**
* Provide String representation of me.
*
* @return My String representation.
*/
@Override
public String toString()
{
final StringBuilder builder = new StringBuilder();
builder.append("TEST RESULTS:").append(NEW_LINE);
builder.append("Type of Test: ").append(testType).append(NEW_LINE);
builder.append("Number of Executions: ").append(numberOfExecutions).append(NEW_LINE);
builder.append("Elapsed Time (milliseconds): ").append(getMillisecondsSpent()).append(NEW_LINE);
builder.append("\t\tStart: ").append(startTime);
builder.append(" ; Stop: ").append(finishTime);
return builder.toString();
}
}
/** Enum representing type of Test. */
private static enum TestType
{
SINGLE,
REDUNDANT
}
}
For the very simple code example used in these tests, I needed to run the tests with many loop iterations to see truly dramatic differences, but the performance difference was obvious. I ran each test several times and averaged the results. In general, when the loop counts were large enough to expose significant differences, I found that the method using the extraneous String instantiation took roughly four times as long to execute as the method using the String literal directly, without the extra "new."
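The comparison described above can be sketched as follows. The class and method names here are illustrative stand-ins rather than the article's actual test methods, and a modern JIT may optimize away the dead instantiation, so any timings should be treated as rough.

```java
// Illustrative sketch of the redundant-versus-literal comparison; these
// names are made up for this example and are not the article's test code.
public class RedundantStringSketch
{
    /** Time a loop that redundantly wraps a literal in new String(). */
    public static long timeRedundant(final int executions)
    {
        final long start = System.currentTimeMillis();
        for (int i = 0; i < executions; i++)
        {
            final String s = new String("Inspired by Actual Events");  // redundant "new"
        }
        return System.currentTimeMillis() - start;
    }

    /** Time a loop that uses the String literal directly. */
    public static long timeSingle(final int executions)
    {
        final long start = System.currentTimeMillis();
        for (int i = 0; i < executions; i++)
        {
            final String s = "Inspired by Actual Events";  // literal only
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(final String[] arguments)
    {
        final int executions = 10_000_000;
        System.out.println("Redundant: " + timeRedundant(executions) + " ms");
        System.out.println("Single:    " + timeSingle(executions) + " ms");
    }
}
```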
Although I ran each test on each number of loops, I show just one representative sample run for a few key data points in the following screen capture. I mark the results of running tests with 1 million loops in yellow and running with 10 million loops in red.
There are many cases in which the extra String instantiation demonstrated above will not have any significant performance impact. However, there is no benefit to specifying an extra String instantiation, and beyond the reduced performance there is the added drawback of extra code clutter.
Note that the examples above extend to similar String uses. Here is another slightly altered example.
// the way NOT to do it
String someString = new String("http://" + theDomain + ":" + thePort + "/serviceContext");
// better way to do this; don't need extra new instantiation
String someString = "http://" + theDomain + ":" + thePort + "/serviceContext";
// NOTE: If you start using loops to assemble long Strings similar to those
// shown above, performance needs will likely dictate use of StringBuilder
// or StringBuffer instead. See
// http://marxsoftware.blogspot.com/2008/05/string-stringbuffer-and-stringbuilder.html
// for additional details.
Finally, as a reminder for anyone new to Java and Java Strings: if you find yourself assembling a large String from many pieces, you will typically be better off using a StringBuilder or StringBuffer instead of a String. The root cause is again too many String instantiations: each concatenation in a loop creates new intermediate String objects.
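The StringBuilder approach recommended above can be sketched like this. The class and method names are hypothetical, invented for this example only; the URL pieces echo the earlier snippet.

```java
// Illustrative sketch only; class and method names are made up for this example.
public class UrlAssemblyExample
{
    /**
     * Assemble a service URL from its pieces with a single StringBuilder
     * rather than repeated String concatenation in a loop.
     */
    public static String buildServiceUrl(
        final String domain, final int port, final String[] pathPieces)
    {
        final StringBuilder builder = new StringBuilder("http://");
        builder.append(domain).append(":").append(port);
        for (final String piece : pathPieces)
        {
            builder.append("/").append(piece);
        }
        return builder.toString();
    }

    public static void main(final String[] arguments)
    {
        final String url = buildServiceUrl(
            "example.com", 8080, new String[] {"serviceContext", "items"});
        System.out.println(url);  // http://example.com:8080/serviceContext/items
    }
}
```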
The Java String's behavior can seem a little strange until one gets used to it, and even then it may still surprise occasionally. The main point of this posting is that String literals are already full-fledged String objects, so there is no need to invoke the String constructor explicitly on them.
Additional Resources
Java Tutorial: Strings
String Constructor Considered Useless Turns Out to be Useful After All
Use of the String(String) Constructor in Java
What is the Purpose of the Expression "new String(...)" in Java?
Java String @ Everything2.com