I'm trying to write a simple query with an IN clause. I need to be able to pass the values in the IN clause as a parameter; the number of values is variable and could be one or thousands depending on the user input. In other SQL databases I have solved this problem by
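One approach that works in Athena/Presto is to pass all the values as a single delimited string and test membership against the split array; a minimal sketch, with illustrative table and column names:

```sql
-- Sketch: pass the values as one comma-separated string parameter
-- (here hard-coded as 'a,b,c' for illustration).
-- split() turns the string into an array; contains() tests membership,
-- which behaves like a variable-length IN clause.
SELECT *
FROM my_table                 -- hypothetical table
WHERE contains(split('a,b,c', ','), my_column);
```

This keeps the query text fixed regardless of how many values the user supplies, since only the single string parameter changes.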
Tag: amazon-web-services
How to successfully convert string to date type in AWS Athena?
I'm trying to convert a date column of string type to date type. I use the below query in AWS Athena: SELECT a, b, date_parse(date_start, '%m-%d-%Y') AS date_start FROM "database"."…
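For reference, `date_parse` in Athena (Presto) returns a timestamp, not a date, so a cast is needed if a true date type is required; a minimal sketch with hypothetical names:

```sql
-- date_parse() returns a timestamp; CAST it to date to get a date value.
-- Column and table names are illustrative.
SELECT a,
       b,
       CAST(date_parse(date_start, '%m-%d-%Y') AS date) AS date_start
FROM my_database.my_table;
```

Note the straight single quotes around the format string: the curly quotes shown in the question's query would themselves cause a syntax error.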
AWS Glue always sends a 'SELECT * …' to the SQL Server, why and how to change that?
I have an AWS Glue JDBC connection to a SQL Server on an EC2 instance. After crawling the whole schema I created a job to query some tables and used the Activity Monitor to check what Glue sends to the database, and the queries are just a SELECT * over the whole table. The code that does this is below:
Does ALTER SCHEMA … RENAME affect permission grants to the schema in Redshift?
If I rename a schema that has been set up with a bunch of permissions to access other schemas with different access rights, will renaming it undo those grants or will they remain in place? Redshift's docs list the following syntax for ALTER SCHEMA, but do not say whether the grants will be
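For reference, the rename form of the statement looks like the sketch below. Since grants are attached to the schema object rather than its name, they would be expected to survive a rename, though this is worth verifying on a test cluster:

```sql
-- Redshift: rename a schema. Grants are bound to the schema object,
-- not its name, so they should remain in place after the rename.
-- Schema and user names are illustrative.
ALTER SCHEMA sales RENAME TO us_sales;

-- Spot-check a grant afterwards:
SELECT has_schema_privilege('some_user', 'us_sales', 'usage');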
MySQL Domo AWS RDS Connector
I'm having issues connecting Domo to a MySQL database hosted on AWS RDS. Whenever I try to authenticate I get this error: "Failed to authenticate. Verify the credentials and try again. Domo is ready, but the credentials you entered are invalid. Verify your account credentials and try again. Error setting up SQL connection. Could not create connection to database server.
Athena sql query to find items not containing a value
I have a table in a bucket, and I am using Athena to get the required data. My table looks like the one below. I need to find all the resources where A-1 is not found; the result should give me i-2. How do I write this in SQL? Answer You can use aggregation to group all rows having the same resourceid together, and then filter
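The aggregate-then-filter idea from the answer can be sketched as follows, with hypothetical column names:

```sql
-- Sketch: group rows by resourceid and keep only the groups where
-- the value 'A-1' never appears. Table/column names are illustrative.
SELECT resourceid
FROM my_table
GROUP BY resourceid
HAVING max(CASE WHEN tag_value = 'A-1' THEN 1 ELSE 0 END) = 0;
```

The CASE expression flags each row, and `max(...) = 0` in HAVING keeps only groups where no row was flagged.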
Snowflake to S3 with Header
Does anyone know of a way to export your data from Snowflake to an S3 file with a header? For example, I have this table: I want to export this data to a file that looks like this: … but I don’t see an option in the Snowflake documentation to do so. I tried a simple UNION ALL with the
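One option worth trying instead of the UNION ALL workaround: Snowflake's COPY INTO <location> statement supports a HEADER option for CSV output. A minimal sketch, assuming an external stage already points at the bucket:

```sql
-- Sketch: unload a table to an S3 stage as CSV with a header row.
-- @my_s3_stage and the table name are illustrative.
COPY INTO @my_s3_stage/export/data.csv
FROM my_table
FILE_FORMAT = (TYPE = CSV)
HEADER = TRUE
SINGLE = TRUE;  -- write one file, so the header appears exactly once
```

With SINGLE = FALSE (the default), the unload may produce multiple files, each with its own header row.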
SYNTAX_ERROR: '"LastName"' must be an aggregate expression or appear in GROUP BY clause
I have two tables, main_table and staging_table. main_table contains the original data, whereas staging_table contains a few updated records that I have to merge into the main_table data, and …
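The error in the title generally means a selected column is neither grouped nor aggregated; the two standard fixes can be sketched like this (table and column names are illustrative):

```sql
-- Option 1: add the column to the GROUP BY.
SELECT id, "LastName", count(*) AS cnt
FROM main_table
GROUP BY id, "LastName";

-- Option 2: wrap the column in an aggregate instead.
SELECT id, max("LastName") AS "LastName", count(*) AS cnt
FROM main_table
GROUP BY id;
```

Option 2 is the usual choice when the column is functionally dependent on the grouping key (one LastName per id) but you do not want it in the GROUP BY list.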
PrestoDB/AWS Athena: Retrieve a large SELECT in chunks
I have to select more than 1.9 billion rows. I am trying to query a table in the AWS Athena console; the table reads Parquet files from an S3 bucket. When I run this query, it seems to time out, as 1.9 billion rows are returned when I run a COUNT on
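A common workaround is to split the scan into deterministic chunks, for example by bucketing on a numeric key and running one query per bucket; a hedged sketch with illustrative names:

```sql
-- Sketch: fetch the table in 10 deterministic chunks by bucketing
-- on a numeric key column; run one query per bucket value 0..9.
SELECT *
FROM my_table
WHERE mod(id, 10) = 0;   -- repeat with = 1, = 2, ... for the rest
```

Each chunk query scans the table but returns only a slice of the rows, keeping individual result sets within Athena's limits; for very large extracts, writing the results back to S3 (e.g. via a CTAS query) rather than returning them to the console is usually more practical.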
Amazon Athena returning "mismatched input 'partitioned' expecting {, 'with'}" error when creating partitions
I'd like to use this query to create a partitioned table in Amazon Athena: Unfortunately I get an error message which tells me the following: line 3:2: mismatched input 'partitioned' expecting {, 'with'} Answer The quotes around 'PARQUET' seemed to be causing the problem. Try this:
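For reference, a working shape of partitioned Athena DDL looks roughly like the sketch below: PARQUET appears unquoted after STORED AS, and PARTITIONED BY comes before the storage clause. All names are illustrative:

```sql
CREATE EXTERNAL TABLE my_table (
  a string,
  b int
)
PARTITIONED BY (dt string)   -- partition column; not repeated in the column list
STORED AS PARQUET            -- keyword, not a quoted string literal
LOCATION 's3://my-bucket/my-prefix/';
```

After creating the table, partitions still have to be registered, e.g. with MSCK REPAIR TABLE or ALTER TABLE … ADD PARTITION, before queries will see any data.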