High-Volume Trading Organization with On-Premises Databases
The Problem: The firm uses a combination of custom reporting and Business Intelligence tools for a summary view of its trading activity. Sometimes, however, its regulatory compliance and fraud teams need access to the underlying data to dig into activity flagged by their systems.
Before: In the past, these teams would need to request that custom queries be run by the data team. Often, the queries would return data sets of more than a million records. The teams would then iterate with the data team, through trial and error, to modify the query until the result was a few thousand records.
With Exponam: Now, when a team needs to review the data underlying one of its summary dashboards or KPIs, they simply click “Extract Data” on their console. The data is sent directly to the .BIG Builder and downloaded to their machine. Even with hundreds of millions of records, the data opens immediately in the .BIG Explorer. Without writing any SQL queries, the user can sort and filter the information until they identify what they need.
Once the team has received the file, they simply open it in the .BIG Explorer and filter it into the different components they need.
Data Distribution Company
The Problem: A political marketing firm regularly distributes voting and polling lists to its clients. These lists can run to several million records. The firm’s clients don’t have an easy way to review, sort, and break down the data.
Before: When a client purchased a list from the firm, the firm would need to break the list into manageable .csv chunks of 100,000 records each. Although the firm would try to break the files into logical parts, several files would often need to be stitched back together to form a single set.
The receiving firm would then need to load these .csv files into a database of its own – MS Access, MS SQL Server, Oracle, etc. – in order to review the contents of the files.
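To make the old workflow concrete, the chunking step described above can be sketched in a few lines of Python. This is a minimal illustration of the manual process the firm had to run, not Exponam code; the function and file names are hypothetical, and only the 100,000-record chunk size comes from the description above.

```python
import csv


def split_csv(path, chunk_size=100_000):
    """Split a large CSV into numbered chunk files of at most chunk_size
    data rows each, repeating the header row in every chunk.
    Returns the number of chunk files written."""
    parts = 0
    with open(path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)  # first row is assumed to be the header
        chunk = []
        for row in reader:
            chunk.append(row)
            if len(chunk) == chunk_size:
                _write_chunk(path, parts, header, chunk)
                parts += 1
                chunk = []
        if chunk:  # flush the final, partially filled chunk
            _write_chunk(path, parts, header, chunk)
            parts += 1
    return parts


def _write_chunk(path, part, header, rows):
    # e.g. "voters.csv" -> "voters_part000.csv", "voters_part001.csv", ...
    out = f"{path.rsplit('.', 1)[0]}_part{part:03d}.csv"
    with open(out, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(rows)
```

A multi-million-record list produces dozens of such files, which is exactly the stitching-back-together burden the receiving firm was left with.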
With Exponam: The entire process is streamlined. The firm simply creates a single .big file containing all the records to be delivered to the client. This file can contain thousands, millions, or even hundreds of millions of records. The file is compressed enough that it can be emailed to the client as a simple attachment, and it is easily opened and manipulated by the client. The client no longer needs to load the data into a database or fumble with multiple files. Furthermore, the file can be secured so that only the specific client who acquired the data may open it.
Cloud-based Data Integration and Cleansing Firm
The Problem: This technology startup compiles data from numerous sources, then standardizes and cleanses it for analysis. The primary analysis artifacts are a series of reports that summarize and aggregate the data. One of the primary functions available to users is downloading an extract of the underlying data sets.
Before: When a user requested an export of the data, a .csv was generated and downloaded to the user’s machine. Extract capability was limited to a single data source at a time, in an attempt to keep each data set under a hundred thousand records. When a data set exceeded 500,000 records, only the first 500,000 records would be sent. This was done for several reasons: 1) to limit the costs of extracting large data sets from the cloud provider; and 2) to keep the download from being too large for the client to handle.
With Exponam: Users are now able to request exports of the entire data set (not just a single source at a time). When a user requests a data export, the data is compressed into a .big file. The user selects a private encryption key, and the data is downloaded to the user’s machine in its entirety and fully encrypted – keeping highly sensitive data secure. The user may then open the file with the .BIG Explorer. Using the filter and sort functions, the user identifies abnormal records and can purge them from their analysis sets.