The place for most Aperture Data Studio discussion
The Data Quality Community is a great place to collaborate, seek answers, and learn Aperture Data Studio.
Hi, We successfully triggered a data refresh on our dataset. However, we now need to stop or cancel the refresh in progress, but we’re unable to locate an option to do so. Could someone please guide us on how to cancel an ongoing refresh? Thanks in advance for your help!
Hi, I want to put a route-to-live process in place whereby only work that has been peer reviewed and signed off is promoted to live, and any other work just stays in the test environment. We already have 2 environments set up (default and test). The test environment isn't really used, so it would be an ideal blank canvas…
Hi, We are trying to pull a dataset into Power BI and limit it to 1000 rows whilst we iterate, to make refreshing snappier. However, we're getting an Error 400 saying that $top isn't supported, though the documentation seems to suggest otherwise. Should this work? It feels like the way Power BI structures the query, perhaps it's the…
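A quick way to narrow this down is to call the OData feed directly, outside Power BI, and see whether the endpoint itself honours $top. A minimal sketch, assuming a placeholder host, view path, and bearer-token auth rather than the actual Aperture endpoint:

```python
# Minimal check of whether the OData endpoint accepts $top, outside of Power BI.
# The URL and authorization header below are placeholders, not the real
# Aperture Data Studio endpoint or auth scheme.
import requests

BASE_URL = "https://your-aperture-host/api/odata/views/YourView"  # placeholder
HEADERS = {"Authorization": "Bearer <your-api-key>"}              # placeholder

resp = requests.get(BASE_URL, headers=HEADERS, params={"$top": 1000})
print(resp.status_code)   # a 400 here would point at the server rather than Power BI
print(resp.text[:500])
```

If the raw call succeeds, the 400 is probably coming from how Power BI composes the query, and applying the row limit client-side in Power Query (for example with Table.FirstN) is a common fallback while iterating.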
Hey community, I have just updated to the new version of ADS and also updated the address validation, copying all the right files to the right folders, but there is still an error on Address Validation. Can anyone help with this?
Hi Team, We are anticipating changes in our datastore architecture, which will significantly impact datasets and views currently sourced from HD Insights. These sources will need to be repointed to Databricks as part of the transition. Could you please advise on the possible solutions or best practices for handling this…
Hi All, I’d like to ask whether Aperture is capable of directly connecting to OpenText Content Server, without requiring any middleware or intermediate systems (such as Talend or external scripts). Is there a native connector or export integration available for pushing data directly from Aperture into Content Server?…
I am working on the input interfaces that PCC manages, which use the following nomenclature: ProcessingDate|Row-count. Here is an example:
20250603|5762202
1416|81178235||Vendor||
6700|81380685||Sold-From Party||
But I can't get the columns to join correctly. As a result I have this: 20250603|5762202||||||…
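For illustration only, a rough sketch of one way the join could be expressed outside Aperture, assuming the first line is a file-level header (ProcessingDate|Row-count) whose values should be carried onto every detail row; the file name and column layout are assumptions:

```python
# A rough sketch, assuming the first line is a file-level header (ProcessingDate|Row-count)
# and the remaining lines are detail records that should carry those two values.
# The file name and column layout are assumptions for illustration.
with open("pcc_input.txt") as f:  # placeholder file name
    lines = [line.rstrip("\n") for line in f if line.strip()]

processing_date, row_count = lines[0].split("|")[:2]

for detail in lines[1:]:
    fields = detail.split("|")
    # Prepend the file-level values so every output row has a consistent column count
    print("|".join([processing_date, row_count] + fields))
```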
Hi All, I would like to fire an event (run a specific workflow) if there are NO rows in the input, but there is only an option to fire events if the input has rows. Basically, the goal is to only send correct data to an external system (Snowflake) if there are no errors (at all) in a dataset. Is there a way to set this? Many thanks!…
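The underlying condition is just an inverted row check; a minimal sketch of that logic outside Aperture, with a placeholder CSV export standing in for the error rows and a print standing in for the Snowflake load:

```python
# A minimal sketch of the inverted check: only push when the error count is zero.
# The file name and the load step are placeholders, not the Aperture/Snowflake setup.
import pandas as pd

errors = pd.read_csv("validation_errors.csv")  # placeholder export of failing rows

if len(errors) == 0:
    print("No errors found - safe to send the dataset to Snowflake.")
else:
    print(f"{len(errors)} error rows found - holding the export.")
```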
Hi All, is there a way to use the second row (instead of the first) as the column names when uploading a dataset? Or is there a workaround? There is a dataset that we would like to drop into the DropZone, but the header is always in the second row… Many thanks! Vera
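One workaround is to reshape the file before it reaches the DropZone. A minimal pre-processing sketch, assuming the file can be staged with pandas first; the file names are placeholders:

```python
# A minimal pre-processing sketch: promote the second row to the header before upload.
# The file names are placeholders.
import pandas as pd

# header=1 tells pandas to treat the second row (zero-indexed row 1) as the
# column names and to discard the row above it.
df = pd.read_csv("incoming_file.csv", header=1)
df.to_csv("incoming_file_clean.csv", index=False)
```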
Hi, I have seen previous posts on partitioning, and use that technique regularly. However, my need this time is slightly different. I have groupings of duplicates, but I would like to count incrementally when I come across a new identifier. I actually have this working now, but it took some Aperture kung-fu, so wondering…
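For comparison, the same incremental count is compact in pandas, assuming the rows are sorted so that duplicates of an identifier sit together; the column names and sample values are made up:

```python
# A sketch of incrementing a counter each time a new identifier appears,
# assuming the data is sorted so rows sharing an identifier are adjacent.
# Column names and sample values are made up for illustration.
import pandas as pd

df = pd.DataFrame({"identifier": ["A", "A", "B", "B", "B", "C"]})

# The counter increases only when the identifier differs from the previous row
df["group_number"] = (df["identifier"] != df["identifier"].shift()).cumsum()
print(df)
```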