How to resolve "No space left on device" errors
Hello,
Over the last few days we have been encountering the error "An unexpected error has occurred during Workflow execution: 9003: An unexpected error has occurred during Workflow execution. No space left on device". We have gone through our installation and removed a lot of datasets we no longer need, which should have freed up a lot of space, but the error is still occurring.
I have looked at our virtual machine (which was also upgraded to double its size last month) and found the following directories to be the source of most of our storage usage (104G in total):
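For anyone investigating a similar issue, per-directory disk usage can be summarised with `du`; the path below is an assumption, so substitute your actual Data Studio data directory:

```shell
# Summarise disk usage of each top-level directory, largest last.
# /opt/datastudio/data is a placeholder -- use your actual install/data path.
du -h --max-depth=1 /opt/datastudio/data 2>/dev/null | sort -h
```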
My first question is whether these index files are what you would expect, or whether they look inflated. For reference, our repository is 183M:
Looking in the large directories above, there appear to be a large number of files (about 50 in full_sort_index, 75 in value_to_rows, and over 700 in group_index). Again, is this an expected amount, or is there a problem with files not being removed/deleted in our setup?
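For reference, the file counts above can be reproduced with `find`; the directory names come from the post, but the parent path is an assumption:

```shell
# Count files in each index directory.
# /opt/datastudio/data/index is a placeholder -- use your actual index path.
for d in full_sort_index value_to_rows group_index; do
  printf '%s: %s files\n' "$d" \
    "$(find "/opt/datastudio/data/index/$d" -type f 2>/dev/null | wc -l)"
done
```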
Our setup is a Linux VM running 2.14.1.169 (we plan to upgrade soon), with just under 10m rows of data in our main dataset. We store roughly 4 copies of this (raw, cleansed, clustered, harmonised) plus a few additional datasets. The VM runs Ubuntu 22.04 on a Standard E16as v5 instance (16 vCPUs, 128 GiB memory).
Many thanks,
Ben
Comments
-
To make sure you are aware before you run into this issue, set up an Automation to notify you when disk space falls below a certain percentage: Data Quality user documentation | Automate actions
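As a simple stop-gap alongside the built-in Automation, a cron-able shell check can warn when usage crosses a threshold. This is a minimal sketch; the 90% threshold and the mount point are assumptions, not Data Studio defaults:

```shell
#!/bin/sh
# Warn if the filesystem holding the repository exceeds the threshold.
THRESHOLD=90   # percent used; adjust to taste (assumption)
MOUNT=/        # placeholder -- use the mount that holds your Data Studio data
USED=$(df -P "$MOUNT" | awk 'NR==2 {gsub("%",""); print $5}')
if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "WARNING: $MOUNT is ${USED}% full"   # swap for a mail/webhook notification
fi
```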
Stop the server, then remove anything in the temp directory:
You can go to Settings > Storage and reduce the number of days temp data is being retained.
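The manual temp clean-up above can be sketched as follows; run it only while the server is stopped. The path and the 7-day cut-off are assumptions, so check Settings > Storage for the real temp location and retention:

```shell
# With the server stopped, list and delete temp files older than 7 days.
# /opt/datastudio/temp is a placeholder -- use your actual temp directory.
find /opt/datastudio/temp -type f -mtime +7 -print -delete
```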
You might be able to delete some historic repository backups:
The above will likely resolve the issue, but you can also check that the data you are loading/storing is being compressed by default, which can reduce size significantly. https://docs.experianaperture.io/data-quality/hosted-aperture-data-studio/data-studio-objects/datasets/#data-compression
-
Hi @BAmos, your repository.db is not particularly large. Since the size is reported in bytes, the number of digits can be misleading, but 183MB is not big. That said, without knowing how you use Data Studio, I would guess that most of it is job history. You can see the number of days' retention in the Edit Environment dialog; I would recommend setting it as low as you can. The same goes for the audit retention period.
The repository.db is tidied and compacted (removing deleted records, etc.) at server startup, so if you've not restarted the server for a while, a restart might help.
I am happy to take a look at what might be causing the increased size, but I'd need the file; please let me know. That said, I don't think it is anything to worry about.
For the indexes, it largely depends on the data size and what you're doing with it. Josh's response contains pretty much everything you need to know on this, but I'm happy to answer any other questions.
Best regards,
Ian