PxPlus User Forum


Recent Posts

Pages: [1] 2 3 ... 10
Programming / Re: [odb] / [mysql] escape characters
« Last post by Mike King on Today at 09:55:02 AM »
The logic for an INSERT or UPDATE SQL statement in the [mysql] interface will double up both the apostrophe and the backslash, so it should work for you.
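For anyone curious, here is a rough Python sketch (hypothetical, not PxPlus code and not the actual interface internals) of what "doubling up" the special characters amounts to:

```python
def escape_mysql_literal(value: str) -> str:
    """Double backslashes and apostrophes so the value is safe inside
    a single-quoted MySQL string literal, as the [mysql] interface is
    described as doing for INSERT/UPDATE statements."""
    return value.replace("\\", "\\\\").replace("'", "''")

# A trailing backslash is exactly the problem case: unescaped, it
# would swallow the closing apostrophe of the literal.
data = "C:\\temp\\"
sql = "INSERT INTO t (f) VALUES ('" + escape_mysql_literal(data) + "')"
```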
Programming / [odb] / [mysql] escape characters
« Last post by bteixeira on August 13, 2020, 04:21:49 PM »
We have code writing to a database using the [odb] tag.  It works well generally, however at one client, they are using a MySQL server.  MySQL needs backslashes to be escaped (especially at the end of a field where it would end up escaping the quote delimiters).  If using the [odb] tag, a backslash in data is not getting escaped.  Would using the [mysql] tag instead resolve this?  Is there an option or setting that would let us use the [odb] tag still but correctly escape the backslashes?
Language / Re: 'XK'
« Last post by bbssupport on August 12, 2020, 05:43:57 PM »
Hi Mike,

Thanks for your reply. It makes sense that results would vary depending on the file and data structure.

Can I enable the XK parameter and issue a KEYED LOAD on each file to convert them to the enhanced VLR format, or do I need to recreate the file from scratch? Is there a way to determine if a file is an enhanced VLR vs a standard VLR, so that I can confirm the conversions have occurred?
Language / Re: 'XK'
« Last post by Mike King on August 12, 2020, 09:33:06 AM »
To see any improvement with the XK parameter, the key table needs to be reloaded. What to expect in terms of improvement depends a lot on your file structure.

The major gain when using XK is that the key blocks, the pages of the file used to maintain the keys, can hold more than 255 keys.  For example, if you have a file with an 8-character key, each key entry in the key block is 13 bytes long, so the maximum space used in a key block is 13 * 255 = 3315 bytes regardless of the block size.  If you are using the default 4K block, that means about 20% of each block cannot be used.  Enabling XK allows the system to put more keys in a key block (~64,000), resulting in smaller file sizes and faster key access due to a shorter key tree.

Now, if your keys are such that 255 keys always fill a key block, this will make little difference; however, if your file has, say, a 16K page size, any key shorter than about 60 bytes results in wasted space.
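The arithmetic above can be sketched quickly (a Python illustration, assuming each key entry occupies key length + 5 bytes, as in the 8-byte-key to 13-byte-entry example):

```python
def wasted_fraction(key_len: int, block_size: int, max_keys: int = 255) -> float:
    """Fraction of a key block left unusable, assuming each key entry
    occupies key_len + 5 bytes (8-byte key -> 13-byte entry)."""
    entry = key_len + 5
    keys_per_block = min(max_keys, block_size // entry)
    return 1 - (keys_per_block * entry) / block_size

# Classic format: at most 255 keys per block.
print(wasted_fraction(8, 4096))          # ~0.19: about 20% of a 4K block unused
print(wasted_fraction(8, 16384))         # ~0.80: most of a 16K block unused

# With 'XK' the 255-key cap is lifted (up to ~64,000 entries per block):
print(wasted_fraction(8, 16384, 64000))  # near zero waste
```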

Also, the XK parameter allows the system to create VLR files beyond 2GB -- so if you need larger files you may want this parameter enabled.
Language / 'XK'
« Last post by bbssupport on August 12, 2020, 01:17:20 AM »
Hi PxPlusers!

I've been playing around with the XK parameter on a few of our data files, but I'm not seeing much of a reduction in file size or increase in performance. Running the test programs from the user's guide gives me the results described, so it's quite possible that the files I'm converting don't benefit much from the enhanced VLR format. However, I would like to do away with segmenting, so I want to convert all our files across to the newer format.

Can someone advise:
1) Is there a way to see (e.g. using FID/FIN etc.) if a file is an enhanced VLR file (i.e. was created with the XK parameter turned on)?
2) Do I have to completely recreate each file with the XK parameter set, including the data dictionary, copy the data across, and then replace the original file with the new one? I read somewhere that you can issue a KEYED LOAD to convert an enhanced VLR file back to a standard VLR so that it can be read by older PVX/PxPlus versions... does the same apply the other way?

Thanks :)
Sorry, I think I misunderstood your question. I thought you were asking why a read from a database took longer than a write to a database.

What I think you are actually asking is why a READ directive on a database is slower than executing an SQL read directly via the WRITE RECORD directive.

If that is the question you are asking, then the answer is that when you do the WRITE RECORD you are using whatever SQL statement is in SQL$, and this SQL can be simple and direct, making it fast. When you use the READ directive, you are asking PxPlus to write the SQL for you, and it has to make its best guess at the fastest SQL to do that. How good a job it does depends on how much information it has about the table's fields and keys. It also depends on the data given in KEY=; if the data is only a partial match, it may have to do more work.

You can turn on the '!Q' parameter to take a look at what SQL PxPlus is sending the DB. Comparing the READ to the WRITE RECORD will probably tell you why it is slower. To make the READ faster, as mentioned above, give PxPlus as much information about the table as possible using REC= and KEY= in the OPEN, and make sure the KEY= value is the correct length for the full key.
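As an illustration of the full-key vs partial-key point (using SQLite as a stand-in engine, purely hypothetical; PxPlus would be generating SQL for your actual database), a full-key equality lookup can seek through the index directly, while a partial match may force a scan:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k TEXT PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(f"K{i:06d}", "x") for i in range(1000)])

# Full key: the engine can seek straight to the row via the index.
full = con.execute(
    "EXPLAIN QUERY PLAN SELECT v FROM t WHERE k = 'K000500'").fetchall()

# Partial key: with default settings this LIKE cannot use the index,
# so the engine scans the whole table.
partial = con.execute(
    "EXPLAIN QUERY PLAN SELECT v FROM t WHERE k LIKE 'K0005%'").fetchall()

print(full)     # plan shows an index SEARCH
print(partial)  # plan shows a table SCAN
```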
Programming / Re: Dummy Select to SQL table takes long to run.
« Last post by mok on August 11, 2020, 04:20:14 PM »
Programming / Re: Performance difference between READ and WRITE RECORD to ODBC channel
« Last post by mok on August 11, 2020, 04:18:11 PM »
So, is it correct to infer that if I want to see a performance gain, I should change all my READ (IBIS_DB'ODB, KEY="!" statements to WRITE RECORD (IBIS_DB'ODB)?
The read has to find the data at the key and then download it to your system. The write just uploads the data to the DB, which then stores it. With the read you have to wait for the search; with the write you don't.
Programming / Re: Dummy Select to SQL table takes long to run.
« Last post by Devon Austen on August 11, 2020, 11:14:35 AM »
The only way to possibly improve performance without modifying any programs would be to try a newer version of PxPlus, which has improved optimization logic, and see if that helps.

Other than that, you would need to modify programs so that PxPlus knows the key definition of the table. Use KEY=KeyFld1,KeyFld2:D,KeyFld3 in the OPT= on the OPEN. Also, apply what I mentioned before about matching the key size and/or using WHERE in the select.