The problem is that we log things into a database. To keep disk space usage down, we can export records from the database into files that can be archived off, or simply delete them. The powers that be want the export to be JSON.
I think of a single JSON file as a single object, so in this case we would create one object containing a list of log messages. The problem is that such a file can contain several million log items, which I think most parsers will choke on. So the only other way I can see to do this is to write one JSON object per log item.
That means a JSON parser cannot handle the file as a whole, but we can write a line reader that walks the file and pushes each line through the JSON parser.
Is that correct?
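A minimal sketch of that line-by-line approach in Python, assuming one JSON object per line (the file name and record fields below are illustrative, not from the original question):

import json

# Write: one JSON object per line (newline-delimited JSON), so no parser
# ever has to hold the whole multi-million-record export in memory.
def export_logs(records, path="logs.ndjson"):
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

# Read: a plain line reader feeds each row to the JSON parser.
def read_logs(path="logs.ndjson"):
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    export_logs([{"ts": "2024-01-01T00:00:00", "level": "INFO", "msg": "started"}])
    for entry in read_logs():
        print(entry["level"], entry["msg"])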
I believe we would have the same problem with XML, but at least we would have the keys ... or we could write the file as a stream of length-prefixed mini-documents.
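For the length-prefixed alternative, a sketch could look like the following; the 4-byte big-endian length header is my own assumption, not something specified in the question:

import json
import struct

# Each record is framed as: 4-byte big-endian length, then the UTF-8 JSON body.
def write_framed(records, path="logs.bin"):
    with open(path, "wb") as f:
        for record in records:
            body = json.dumps(record).encode("utf-8")
            f.write(struct.pack(">I", len(body)))
            f.write(body)

def read_framed(path="logs.bin"):
    with open(path, "rb") as f:
        while True:
            header = f.read(4)
            if len(header) < 4:
                break
            (size,) = struct.unpack(">I", header)
            yield json.loads(f.read(size).decode("utf-8"))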
Thank you.
The whole point of JSON does not really square with storing several million entries in a single file.
The whole point of JSON was to eliminate the overhead of XML; if you write each record as its own JSON object, you are back to storing overhead bytes that serve no purpose. The next logical step is to write a plain CSV file with a header record, which just about everything on the planet knows how to import.
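A sketch of that CSV export, assuming a fixed set of columns per log record (the column names here are illustrative):

import csv

FIELDS = ["ts", "level", "msg"]  # illustrative column set

def export_csv(records, path="logs.csv"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()          # the header record names each column once
        for record in records:
            writer.writerow(record)   # each log item becomes one flat row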
If, for some reason, you have child records, then you should look at how EDI works.
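One EDI-like way to flatten parent and child records into a single file is to prefix each row with a record-type tag; the "LOG"/"DTL" codes and field names below are my own sketch, not a standard EDI segment layout:

import csv

# Every line starts with a record-type code, so a parent ("LOG") row can be
# followed by any number of child ("DTL") rows in the same flat file.
def export_with_children(entries, path="logs.txt"):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for entry in entries:
            writer.writerow(["LOG", entry["ts"], entry["msg"]])
            for detail in entry.get("details", []):
                writer.writerow(["DTL", detail["key"], detail["value"]])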