double i = o * 100000.0;
The value of i after this is 229999.99999999997 (you can see this yourself by adding a simple print statement; it's due to floating-point inaccuracies).
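For instance, a minimal sketch of such a print statement (it assumes o is 2.3, which reproduces the value above; the class name is just for illustration):

public class MultiplyDemo {
    public static void main(String[] args) {
        double o = 2.3;                // assumed value of o, chosen to reproduce the output above
        double i = o * 100000.0;
        System.out.println(i);         // prints 229999.99999999997, not 230000.0
    }
}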
int i = (int) (o * 100000.0);
The value of i after this is 229999 (i.e. the value above with the fractional part truncated due to the int cast).
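Again as a sketch, under the same assumption that o is 2.3:

public class CastDemo {
    public static void main(String[] args) {
        double o = 2.3;                       // assumed value of o
        int i = (int) (o * 100000.0);         // the cast simply drops the fractional part
        System.out.println(i);                // prints 229999
    }
}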
Therefore, in the line
double d = i / 100000.0;
you are using two numerically different i values in your two snippets (i.e. they differ by about 1), hence the different outputs.
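Putting both snippets side by side (still assuming o is 2.3) makes the difference visible:

public class DivideDemo {
    public static void main(String[] args) {
        double o = 2.3;                            // assumed value of o

        double iAsDouble = o * 100000.0;           // 229999.99999999997
        int iAsInt = (int) (o * 100000.0);         // 229999

        System.out.println(iAsDouble / 100000.0);  // prints 2.3 here: the small error rounds away in the division
        System.out.println(iAsInt / 100000.0);     // prints 2.29999
    }
}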
You are correct in saying that when dividing an int by a double, the int is first converted to a double (this is done through the bytecode instruction i2d).
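If you want to check that yourself, javap can disassemble the compiled class; a sketch (the class name is just for illustration):

public class I2dDemo {
    public static void main(String[] args) {
        int i = 229999;
        double d = i / 100000.0;    // the int operand is widened to double before the division
        System.out.println(d);
    }
}

Running javap -c I2dDemo on the compiled class should show an i2d instruction before the ddiv that performs the division.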