Use case
As of now, numeric conversions like BigDecimal to Integer use BigDecimal.intValue(), which is a narrowing conversion: it discards the fractional part and may overflow. My applications, and probably many others', would benefit from stricter behavior, like that provided by BigDecimal.intValueExact().
This should apply to all potentially lossy conversions (double-to-int, long-to-int, etc.), potentially even to cases like converting a timestamp with nanosecond precision into a type that only supports millisecond precision.
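For reference, the difference between the two JDK methods can be sketched with plain java.math, no MapStruct involved:

```java
import java.math.BigDecimal;

public class PrecisionLossDemo {
    public static void main(String[] args) {
        BigDecimal fractional = new BigDecimal("1.9");
        // Narrowing: the fractional part is silently discarded.
        System.out.println(fractional.intValue()); // prints 1

        BigDecimal tooBig = new BigDecimal("3000000000");
        // Narrowing: the value silently overflows the int range.
        System.out.println(tooBig.intValue()); // prints -1294967296

        // Exact variant: throws instead of losing information.
        try {
            fractional.intValueExact();
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage()); // prints "Rounding necessary"
        }
    }
}
```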
I think an ideal solution would be a new configuration flag, e.g. mapstruct.precisionLossStrategy, with values like IGNORE and ERROR, defaulting to IGNORE to keep existing code backwards compatible.
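If adopted, the flag could presumably be passed like the existing MapStruct processor options, as an annotation-processor argument (hypothetical; this flag does not exist today):

```
javac -Amapstruct.precisionLossStrategy=ERROR ...
```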
Example code with current behavior:
import java.math.BigDecimal;

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface MyMapper {

    MyMapper INSTANCE = Mappers.getMapper(MyMapper.class);

    IntegerRecord map(BigDecimalRecord value);

    BigDecimalRecord map(IntegerRecord value);

    record BigDecimalRecord(BigDecimal number) {}

    record IntegerRecord(Integer number) {}

    static void main(String[] args) {
        System.out.println(INSTANCE.map(new BigDecimalRecord(new BigDecimal("1.9"))));
    }
}
Output:
IntegerRecord[number=1]
Desired output:
java.lang.ArithmeticException: Rounding necessary
Generated Code
Currently generated code:
@Override
public MyMapper.IntegerRecord map(MyMapper.BigDecimalRecord bigDecimalRecord) {
    // ...
    if ( bigDecimalRecord.number() != null ) {
        number = bigDecimalRecord.number().intValue();
    }
    // ...
}
Desired generated code:
@Override
public MyMapper.IntegerRecord map(MyMapper.BigDecimalRecord bigDecimalRecord) {
    // ...
    if ( bigDecimalRecord.number() != null ) {
        number = bigDecimalRecord.number().intValueExact();
    }
    // ...
}
Possible workarounds
The current alternatives I see:
- using qualifiedByName with a custom method to ensure lossless conversion
  - but that easily gets overwhelming when dealing with lots of numeric field mappings
  - moreover, one needs to apply it manually to every field pair, which makes missing some very likely
- using typeConversionPolicy = ERROR and providing manual conversion methods for all required conversions
  - less error-prone, but still requires setting the flag everywhere or importing a shared config everywhere (mistake opportunities), plus all the boilerplate code that could be provided by MapStruct
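A rough sketch of the first workaround, reusing the record types from the example above (the @Named helper and its name are illustrative):

```java
import java.math.BigDecimal;

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.Named;

@Mapper
public interface StrictNumberMapper {

    // Must be repeated for every lossy field pair, which is the main drawback.
    @Mapping(target = "number", source = "number", qualifiedByName = "exactInt")
    MyMapper.IntegerRecord map(MyMapper.BigDecimalRecord value);

    @Named("exactInt")
    static Integer toExactInt(BigDecimal value) {
        // intValueExact() throws ArithmeticException on fraction or overflow.
        return value == null ? null : value.intValueExact();
    }
}
```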
If there is a better way to achieve the described behavior, I'd be glad to know.
MapStruct Version
1.6.3