Making an Oracle SQL Database and PowerShell Play Together
As an SDET, my work often involves dealing with automated tests in different capacities. However, as I like to remind folks, the skillset of an SDET can extend beyond the standard expectations of knowing how to write Selenium tests. Other aspects of the role include fetching data reliably, test agent setup and teardown, Robotic Process Automation, DevOps, and pipelining.
In this case, I had a problem getting data out of an Oracle database and into a specific CSV format for a load test. If you’re in the SDET space and want to know more about Oracle SQL and PowerShell, this blog is for you!
Getting Started with Oracle SQL and PowerShell
For context, the original process I used for data fetching was manual. I would open SQL Developer, run a query, export it as a CSV, and then do a bit of cleanup in VS Code. It didn’t take long, but the process was repeated several times. The first three or four times, I did it by hand as it only took me about ten minutes (mostly connecting to the correct database and cleaning up the CSV, excluding query time). After that, however, I started to think about automation and writing a script to run the process while I performed other tasks.
A Simple Solution with PowerShell
My main requirement was to fetch data and maintain a specific format with minimal manual intervention. In my case, I had to populate data for a load test script. The data in question resided in an Oracle database (DB), and I needed to extract it into a CSV without any extra formatting, so that the load test would not query the database in real time. Our soft security requirements included ensuring that a broad audience could execute the load tests but could not modify them to do anything malicious to the DB.
A few notes before the script:
- The customer I was with had an older version of SQL Plus. If you have version 12.2 or higher, an additional setting, “set markup CSV on,” can be used in the SQL script.
- The Import-Csv pipeline at the bottom does two things: Import-Csv adds headers in the format I wanted, and Select-Object -SkipLast 1 removes the last line, which was an informational print-out from SQL Plus.
- The query below is fake. The actual query included multiple joins that could take several minutes to run for the dataset we needed.
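As the first bullet mentions, SQL Plus 12.2+ can emit CSV directly via its markup mode, which would remove the need for the manual column concatenation below. A minimal sketch of what the generated SQL script could look like with that setting, reusing the same hypothetical query and spool path as the script below (exact option behavior may vary slightly by version):

```sql
-- Sketch for SQL Plus 12.2+ only: CSV markup mode emits comma-separated rows directly
Set markup csv on quote off
Set heading off
Set feedback off
Set termout off
Spool C:\temp\load_test_data\accounts.csv
Select distinct account_id, trim(full_address), zip_code
From schema.accounts
Where rownum < 200000;
Spool off
Exit
```

With markup mode handling the commas, the select list can stay as plain columns instead of a string-concatenation expression.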
Param (
    [Parameter(Mandatory)]
    [string]$Sid,

    [Parameter(Mandatory)]
    [string]$UserName
)

# Location to save the data
$CsvLocation = "C:\temp\load_test_data\accounts.csv"

# Location to save the SQL command for SQL Plus to pick up
$SqlLocation = "C:\temp\load_test_data\script.sql"

"Set colsep ,
Set headsep off
Set pagesize 0
Set trimspool on
Set termout off
Spool $CsvLocation
Select distinct(account_id) || ',' || trim(full_address) || ',' || zip_code
From schema.accounts
Where rownum < 200000
Spool off
Exit" | Out-File -FilePath $SqlLocation

Write-Output "Beginning query. This could take up to 10min."

# Prompt for the password without echoing it to the console.
# Note: ConvertFrom-SecureString -AsPlainText requires PowerShell 7+.
$Password = ConvertFrom-SecureString -SecureString (Read-Host -AsSecureString "Please input $UserName password") -AsPlainText
sqlplus "$UserName/$Password@$Sid" "@$SqlLocation"

# Add the headers I wanted and drop the last line (an informational print-out from SQL Plus)
$Csv = Import-Csv $CsvLocation -Header ACCOUNT, ADDRESS, ZIP | Select-Object -SkipLast 1
$Csv | Export-Csv $CsvLocation -NoTypeInformation

Write-Output "Data exported to: $CsvLocation"
That’s it. Execute the PowerShell script, and it outputs the query data to a CSV without additional formatting!
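For reference, invoking the script looks like this, assuming it is saved as Export-Accounts.ps1 (a hypothetical file name) and your instance’s SID is ORCL (also illustrative):

```powershell
# Hypothetical file name, SID, and user name; the script prompts for the password itself
.\Export-Accounts.ps1 -Sid ORCL -UserName load_test_user
```

Because the password is read as a secure string at run time, credentials never have to live in the script itself, which fits the soft security requirement of handing the script to a broader audience.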
As with most automation, the benefit here is in repetition. Ultimately, the script took me a little over an hour to research and write. I then spent another thirty minutes troubleshooting while generating a new dataset. I have already used the script several times and handed it off to another person who had the privileged account information, but not the database knowledge, to run the query by hand.
The script saves time for the project’s duration and allows more people to handle this task without knowing the proper joins and formatting.
What other processes have you automated that help free up someone’s time for other tasks?
Curious about what it takes to work as an SDET? Read my guide to selecting the perfect SDET for your team and how to master the role.