Nothing to do for MySQL, Microsoft SQL Server, or Oracle.
Only PostgreSQL may require action, and only if you reach 2 billion records.
Applies to Requirement Yogi for Confluence 2.6.5 and above.
Prior to version 2.6.5, all our database records were identified by 32-bit integer IDs, which are limited to about 2.1 billion values.
In version 2.6.5, we performed the first step of upgrading those IDs to the 64-bit type "long", which supports up to about 9.2 quintillion values, for the table AO_32F7CE_AOINTEGRATION_QUEUE, which contains the messages sent to Jira.
In future versions, we will change the rest of the tables.
As a database administrator, when should I act?
- At the latest, you can wait for users to raise the issue. Unfortunately, from the moment the limit is reached, pages will still be saved but requirements will no longer be updated.
- If you want to act early, we suggest acting when IDs in the table AO_32F7CE_AOINTEGRATION_QUEUE approach 2 billion, since the hard limit is 2,147,483,647 (~2.147 billion). You can check this in the "Usage metrics" tab of the Requirement Yogi administration.
- The fix is simple: you only need to increase the maximum value of the ID sequence.
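If you prefer to check directly in the database, here is a minimal sketch of how to see how close the table is to the 32-bit limit. The column name "ID" follows Active Objects naming conventions, and the sequence name shown is an assumption; resolve the real one with pg_get_serial_sequence:

```sql
-- Highest ID currently in the queue table (limit is 2,147,483,647).
-- Active Objects uses quoted, upper-case identifiers, so quoting matters.
SELECT MAX("ID") FROM "AO_32F7CE_AOINTEGRATION_QUEUE";

-- Find the actual name of the sequence backing the ID column:
SELECT pg_get_serial_sequence('"AO_32F7CE_AOINTEGRATION_QUEUE"', 'ID');

-- Then inspect the sequence's current value (sequence name is an assumption):
SELECT last_value FROM "AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq";
```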
Expected error on PostgreSQL
On PostgreSQL only, once you reach 2 billion records in the queue, PostgreSQL will refuse to insert new messages.
Users may come to you with an error such as "The requirements couldn't be reindexed for Requirement Yogi. (...) addToQueue()":
Or you may see, in the logs, an error such as:
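For reference, the PostgreSQL message for an exhausted sequence typically looks like the following; the exact sequence name varies by installation, and the one shown here is an assumption based on Active Objects naming conventions:

```
ERROR:  nextval: reached maximum value of sequence "AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq" (2147483647)
```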
From that point on, records are no longer saved. You must raise the sequence's maximum value.
How to fix in Postgres
The type of the column is already "bigint" and doesn't need to be changed. Only the sequence needs its maximum increased:
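A minimal sketch of the fix, assuming the default Active Objects sequence name (verify yours first with pg_get_serial_sequence):

```sql
-- On PostgreSQL 10 and later, changing the sequence's data type
-- also raises its MAXVALUE to the bigint maximum:
ALTER SEQUENCE "AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq" AS bigint;

-- On older PostgreSQL versions, raise the maximum explicitly
-- (9223372036854775807 is the bigint maximum):
ALTER SEQUENCE "AO_32F7CE_AOINTEGRATION_QUEUE_ID_seq" MAXVALUE 9223372036854775807;
```

Run this as a user with ownership of the sequence (typically the Confluence database user), and take a backup before altering production objects.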
That is all
Only the table AO_32F7CE_AOINTEGRATION_QUEUE is affected, from version 2.6.5 on, and only on PostgreSQL.