I have a table that keeps growing: I'll soon add more than 100 features (e.g. phone_number, username) to the processed_donors table. Is it possible to simplify this json_object(...) call so it isn't an ever-growing list? Is there some way to use pandas + Python, select *, etc.?
read_cur.execute("""
    select a.donor_id,
           json_object('city', a.city, 'name', a.name, 'zip', a.zip,
                       'state', a.state, 'address', a.address),
           b.donor_id,
           json_object('city', b.city, 'name', b.name, 'zip', b.zip,
                       'state', b.state, 'address', b.address)
    from (select DISTINCT l.donor_id as east, r.donor_id as west
          from blocking_map as l
          INNER JOIN blocking_map as r using (block_key)
          where l.donor_id < r.donor_id) ids
    INNER JOIN processed_donors a on ids.east = a.donor_id
    INNER JOIN processed_donors b on ids.west = b.donor_id
""")
Answer
I think you could do this in two steps.
First, get all the columns of the processed_donors table. Then use that result to build your query string.
read_cur.execute('DESCRIBE processed_donors')
describe_result = read_cur.fetchall()
# the first field of each DESCRIBE row is the column name
column_names = [row[0] for row in describe_result]
# column_names should be ['donor_id', 'city', 'name', 'zip', 'state', 'address'];
# skip donor_id, since it is selected separately in the query
pairs = [f"'{col}', a.{col}" for col in column_names if col != 'donor_id']
jo_a = f'json_object({", ".join(pairs)})'
# jo_a is now:
# json_object('city', a.city, 'name', a.name, 'zip', a.zip, 'state', a.state, 'address', a.address)
Note that I don’t have a MySQL DB running here, so I can’t check whether that’s the correct way to build column_names, but I hope you get the idea. You can use the same process to build the json_object for b and then assemble the full query string.
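Putting those pieces together, here is a minimal sketch of how the whole query could be assembled. The helper names (json_object_expr, build_pair_query) are mine, not part of any library; the column list is assumed to come from the DESCRIBE step above, with donor_id filtered out:

```python
def json_object_expr(alias, columns):
    """Build a json_object(...) expression for one table alias."""
    pairs = ", ".join(f"'{col}', {alias}.{col}" for col in columns)
    return f"json_object({pairs})"

def build_pair_query(columns):
    """Assemble the pair-comparison query from a list of column names."""
    jo_a = json_object_expr("a", columns)
    jo_b = json_object_expr("b", columns)
    return f"""
        select a.donor_id, {jo_a}, b.donor_id, {jo_b}
        from (select DISTINCT l.donor_id as east, r.donor_id as west
              from blocking_map as l
              INNER JOIN blocking_map as r using (block_key)
              where l.donor_id < r.donor_id) ids
        INNER JOIN processed_donors a on ids.east = a.donor_id
        INNER JOIN processed_donors b on ids.west = b.donor_id
    """

# usage, given column_names from the DESCRIBE step:
# query = build_pair_query([c for c in column_names if c != 'donor_id'])
# read_cur.execute(query)
```

This way, adding 100 new columns to processed_donors requires no change to the query-building code at all.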
Please keep in mind that building a SQL query by string interpolation like this is normally a bad idea due to SQL injection vulnerabilities, but I presume there’s no user input involved here and you control the names of the columns, so it should be fine.
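If you want a cheap safety net anyway, one option (my addition, not required for the answer to work) is to validate the column names against a conservative identifier pattern before interpolating them into SQL:

```python
import re

# plain SQL identifiers only: letters, digits, underscores, not starting with a digit
_IDENT = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*$')

def safe_columns(columns):
    """Raise if any column name doesn't look like a plain identifier."""
    bad = [c for c in columns if not _IDENT.match(c)]
    if bad:
        raise ValueError(f"suspicious column names: {bad}")
    return columns
```

Since the names come from DESCRIBE on your own table, this check should never fire in practice; it just turns a silent injection risk into a loud error.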