Two good Java questions courtesy of a good friend:
#1.
java.util.Set<Short> set = new java.util.HashSet<Short>();
for ( short i = 0; i < 100; i++ ) {
    set.add( i );
    set.remove( i - 1 );
}
System.out.println( set.size() );
What is the output?
The answer is 100. The reason is that i - 1 is promoted to int and then autoboxed to a java.lang.Integer, not a Short, and a Short is never .equals() to an Integer, so nothing ever gets removed from the Set. Fun!
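If you actually want the removals to happen, casting the expression back to short makes autoboxing produce a Short; a minimal sketch of that fix:

java.util.Set<Short> set = new java.util.HashSet<Short>();
for ( short i = 0; i < 100; i++ ) {
    set.add( i );
    set.remove( (short) ( i - 1 ) ); // boxes to a Short, so the remove matches
}
System.out.println( set.size() ); // prints 1: only the final element, 99, survives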
#2.
Number n = args.length == 0 ? new Integer( 3 ) : new Float( 2.0f );
System.out.println( n );
What is the output?
This one bothers me even more. The answer is 3.0. Java 5+ decided that the entire ternary, even though it only needs to return a java.lang.Number for the assignment, should actually produce a java.lang.Float, since it could yield an Integer or a Float and you can shove an Integer into a Float without losing precision but not vice-versa. (Under the hood, binary numeric promotion unboxes both operands to int and float, converts the int to float, and the resulting float is boxed back into a Float.) This is documented but ... yick! gross!
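One way around it, if you really want the Integer back out, is to cast an operand to Number so the compiler treats it as a reference conditional and skips the numeric promotion; a minimal sketch (assuming the same args as above):

Number n = args.length == 0 ? (Number) new Integer( 3 ) : new Float( 2.0f );
System.out.println( n ); // prints 3 when run with no arguments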
Update: I'm told proper attribution for these examples goes to Neal Gafter and William Pugh. Also, my statement that "you can shove an Integer into a Float without losing precision" isn't accurate. Some of the float's bits go to the exponent, leaving only 24 bits for the mantissa, so you can technically lose some precision. Going the other way (float -> int) you always lose precision though.
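A minimal sketch of that mantissa limit, for the int -> float direction (a Java float is IEEE 754, with 24 significant bits counting the implicit leading one):

int big = ( 1 << 24 ) + 1; // 16777217 needs 25 significant bits
float f = big; // implicit, but lossy, widening conversion
System.out.println( (int) f ); // prints 16777216: the low bit was rounded away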
1 comment:
Technically, converting float to int does not always lose precision, if the floating point number is exactly representable as a 32-bit signed integer (e.g. 3.0f converts to 3 without loss of precision).
What can also be slightly surprising is that Java has implicit conversion from int to float which is lossy (e.g. float f = 268435455; yields 268435456.0f).