Poorly allocated memory: detailed error returns such as CAC0000018 - Failed to allocate memory for stored key
Insufficient memory area in the DB: commonly SQL0000047 - Allocation failed, along with an ODBC error message in jde.log
Possible ways to debug:
Insufficient memory in the logic server: the majority of these issues may come from Unix platforms in a Unicode environment. The sections below cover this topic.
Poorly allocated memory: capture jdedebug.log (the call object kernel log) and analyze it. Commonly this issue reflects a corrupted specification, which can be restored through the package process.
Insufficient memory area in the DB: refer to the installation guide and verify that all parameters are set properly.
Q1. What are commonly reported UBEs that may produce the MEMORY ALLOCATION FAILURE issue?
Most commonly the issue arises when the source data is huge, or when a routine uses a huge amount of cache to hold information before printing it or committing it to a table. The issue mostly comes from reports that:
Run periodically (weekly, monthly, or yearly)
Have to read huge transactional files and store them in cache (not in a work/temp file)
Interface external data into standard tables with a huge amount of inbound data
Each memory space is sized exactly based on the data in the tables. For example, a tree node is needed for each record for each index in each table, plus an extra NIL node for each table to indicate the tree leaves for that table's indices. So we multiply the number of records by the number of indices, add 1 for each table, and sum across all tables to arrive at the exact number of tree nodes needed to load the cache. This means that if somebody adds a record to a table and reloads the cache, we do not have enough memory to hold all records. In that situation, we create an extra memory space of the needed type whenever we run out of memory. However, if records are deleted, we do not release memory or shrink the size of the cache.
Note: A red-black tree is a type of self-balancing binary search tree, a data structure used in computer science, typically used to implement associative arrays.
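The node-count calculation above can be sketched as follows. The record and index counts are hypothetical examples, not taken from an actual environment:

```python
def tree_nodes_needed(tables):
    """Estimate the number of red-black tree nodes needed to load the cache.

    tables: list of (record_count, index_count) pairs, one per table.
    One node is needed per record per index, plus one NIL node per table.
    """
    total = 0
    for records, indices in tables:
        total += records * indices + 1  # data nodes + the table's NIL node
    return total

# Hypothetical example: a 10,000-record table with 3 indices and a
# 5,000-record table with 2 indices.
print(tree_nodes_needed([(10_000, 3), (5_000, 2)]))  # 40002
```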
This calculation assumes the keys and indices of a given table are left unchanged. If keys or indices are changed, the cache should still work; however, the calculation above will be off. Approximately 193K is used by the cache for non-data overhead. In addition, the memory used will be equivalent to the size of the data in the table. In Unicode and double-byte Asian languages, two bytes are required per character; that is why memory usage in a Unicode environment is higher than in a single-byte environment.
Computation of size in bytes:

Data Type         Size in DD                  Actual Size in Bytes
Character         1                           2
String            The size in DD + 1 ('\0')   Size in DD x 2
Math Numeric      Various                     49
Integer           11                          4
Pointer (GENLNG)  11                          4
Date              6                           6
JDEUTIME          11                          16
For example, if F4101 (Item Master) contains 10,000 rows, the memory required can be estimated from the sizes above.
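A minimal sketch of such an estimate, using the per-type byte costs from the table above. The column layout here is hypothetical for illustration only and is not the real F4101 schema:

```python
def column_bytes(data_type, dd_size=0):
    """Return the in-cache byte cost for one column of the given type
    (Unicode environment). dd_size is the Data Dictionary size, which
    is only needed for String columns."""
    fixed = {"Character": 2, "MathNumeric": 49, "Integer": 4,
             "Pointer": 4, "Date": 6, "JDEUTIME": 16}
    if data_type == "String":
        return dd_size * 2  # two bytes per character in Unicode
    return fixed[data_type]

# Hypothetical column layout -- NOT the actual F4101 schema.
columns = [("String", 25), ("String", 30), ("Character", 0),
           ("MathNumeric", 0), ("Date", 0)]
row_bytes = sum(column_bytes(t, s) for t, s in columns)

rows = 10_000
overhead = 193 * 1024  # ~193K of non-data cache overhead
total = rows * row_bytes + overhead
print(row_bytes, total)  # 167 1867632
```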
Q5. What is the best way to work around this type of error?
This has to be approached from the application point of view, adapting your business logic to cope with the technical limitation.
Purge data you do not need to handle
Cache data that is repeatedly requested from a certain routine
Handle table caches with care (if the source table is too huge, do not cache it)
Split a single batch into many by narrowing the runtime data selection
Adjust the physical memory size if applicable
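The splitting idea can be illustrated outside of JDE with a generic sketch: instead of caching an entire source table at once, process it in bounded chunks. All names here are hypothetical, and `handle_chunk` is a placeholder for the real per-chunk work:

```python
def process_in_chunks(rows, chunk_size=1000):
    """Process rows in fixed-size chunks so memory use stays bounded,
    instead of caching the whole source table at once."""
    chunk = []
    for row in rows:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield handle_chunk(chunk)  # commit/print, then release memory
            chunk = []
    if chunk:  # flush the final partial chunk
        yield handle_chunk(chunk)

def handle_chunk(chunk):
    # Placeholder for the real per-chunk work (print or commit).
    return len(chunk)

print(sum(process_in_chunks(range(2500), chunk_size=1000)))  # 2500
```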
Notes:
In most cases this is a constraint of your server and hardware, so try to work around it when you get this type of error
For an individual UBE, look for best practices
For instance, if parallel processing is applicable, it may use less physical memory per process because a single process is divided into multiple processes (and it also gives you better performance)
Q7. Is there a way to avoid dumping memory diagnostics to a dmp file when jdemem logs a Memory Allocation Failure?
From Tools Release 9.1.2.0 onwards, the "DmpMemDiagForMAF" setting under the [DEBUG] section of the enterprise server JDE.INI, when set to 0, disables the out-of-memory flag and prevents dumping memory diagnostics to a dmp file when jdemem logs a Memory Allocation Failure. This parameter was implemented through <Bug 13715335> : NO CONFIGURATION SETTING TO DISABLE DMP FILE CREATION FOR OUT OF MEMORY.
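Based on the description above, the setting would look like this in the enterprise server JDE.INI (a sketch; verify the exact key and behavior against your Tools Release documentation):

```ini
[DEBUG]
; Set to 0 to suppress the memory-diagnostic dmp file when jdemem
; logs a Memory Allocation Failure (Tools Release 9.1.2.0 and later)
DmpMemDiagForMAF=0
```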