BIRT DiskCache error with custom objects
Posted 07 June 2012 - 02:06 PM
I have a report that uses a scripted dataset as its data source. In addition to the standard primitives (string, integer, boolean, etc.), some fields in the dataset hold Java objects such as Hashtables and other custom classes. This works fine for the full report, UNTIL the report grows large enough that BIRT needs to cache it to disk during generation. At that point the report fails with a NullPointerException in org.eclipse.birt.data.engine.olap.data.util.ObjectWriter. The root cause is that BIRT does not recognize the non-primitive data types and fails.
Does anyone have any guidance on dealing with these custom data types appropriately? Are there certain interfaces that must be implemented to allow BIRT to cache them?
This seems like a common situation, so I am hoping that some of you can provide some wisdom on how to handle it. Thanks in advance for your help!
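For context, the fetch event of the scripted dataset looks roughly like this (the field and class names are simplified placeholders, and "items"/"rowIndex" are set up in the open event):

    if (rowIndex >= items.size()) return false;
    var item = items.get(rowIndex);
    rowIndex++;
    row["name"] = item.getName();             // String: caches fine
    row["attributes"] = item.getAttributes(); // java.util.Hashtable: fails once the
                                              // report is big enough to spill to disk
    return true;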
Posted 12 June 2012 - 04:43 AM
I dug into the BIRT source. In ObjectWriter, the writer used to persist each value is selected by a switch over the column's data type, and only these types are handled:
case DataType.BOOLEAN_TYPE :
case DataType.INTEGER_TYPE :
case DataType.DOUBLE_TYPE :
case DataType.STRING_TYPE :
case DataType.DATE_TYPE :
case DataType.BLOB_TYPE :
case DataType.BIGDECIMAL_TYPE :
case DataType.SQL_DATE_TYPE :
case DataType.SQL_TIME_TYPE :
When anything outside of these data types is passed in (in particular the "unknown" data type, value -1), a null is returned for the writer, which then triggers the NullPointerException when ObjectWriter.write reaches this statement:
writer.write( file, obj );
I don't see how this part of the disk caching logic could ever work with data types outside the list above. It is all driven from the StructureDiskArray class when it persists the cached objects.
I'm hoping there is something obvious I'm missing. Is this a limitation in BIRT, or is there a supported way to handle custom types here? Any ideas?
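The only workaround I can think of so far is to serialize the objects myself in the fetch script, so the column only ever holds one of the supported types from the list above (a BLOB in this untested sketch; "item" and the column name are placeholders):

    var bos = new java.io.ByteArrayOutputStream();
    var oos = new java.io.ObjectOutputStream(bos);
    oos.writeObject(item.getAttributes());     // works for Serializable objects like Hashtable
    oos.close();
    row["attributesBlob"] = bos.toByteArray(); // byte[] should map to BLOB_TYPE
    return true;

Anything that consumes the column would then have to deserialize it again, which is clumsy, so I'd still prefer a proper answer.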