I have a .dtsx package running in SSIS on our SQL Server (2016 SP2 CU6). It's a lightweight task that imports an Excel (xlsx) worksheet into a SQL table as-is; no transformations are needed at this stage. There is an Agent job that does a lot more around it, but I can see the fault occurring in just this section.
The issue I am seeing is that a number in Excel (0.0153) comes out in SQL as if converted badly (1.5299999999999999E-2).
To confirm: the number in Excel is 0.0153, not something longer that is merely formatted to look like it.
After the import this is what I have:
I find this odd because:
- They are both in a very similar situation but with different outcomes
- They are one after the other in the Excel worksheet
- They are near the top of the worksheet (rows 10 and 11), so if any sampling is happening to infer the data type it should be using them (row 1 is 0.193841, which is also similar; the overall range is -29 to 200 with up to 7 decimal places)
- There is only 0.0021 difference between them
The number format in Excel is General and the column in SQL is NVARCHAR(255).
In the SSIS package, the Excel source DataType for the External Column is classed as double-precision float [DT_R8] (not sure where that comes from, and I have no ability to change it) and the Output Column is Unicode string [DT_WSTR].
I am then using a Native OLE DB\SQL Server Native Client 11.0 OLE DB connection to get to the table.
It's a simple package, so why is this value jumping so wildly? It's not a one-off: the input has around 1,700 rows and we see this on a number of them.
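For what it's worth, the value is behaving the way doubles always do: 0.0153 has no exact binary representation, so once the Excel driver hands SSIS a DT_R8 it holds the nearest double (roughly 0.0152999999999999993), and converting that to a string at full round-trip precision (17 significant digits) produces exactly the text you are seeing. A quick Python sketch of the same effect:

```python
# 0.0153 cannot be stored exactly in an IEEE 754 double.
# The nearest double is slightly below 0.0153, and printing it with
# 17 significant digits (round-trip precision, which is effectively
# what the DT_R8 -> string conversion does) exposes the difference.
shortest = repr(0.0153)      # Python's shortest round-trip rendering
full = "%.17g" % 0.0153      # full 17-significant-digit rendering

print(shortest)  # 0.0153
print(full)      # 0.015299999999999999  (i.e. 1.5299999999999999E-2)
```

Whether you get the short or the long form depends entirely on how many digits the converter chooses to emit, not on the stored value, which is identical in both cases.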
Answer
In a corporate environment we can't always choose the source and destination data, and sometimes we just have to “make it work”. Excel is especially a nightmare in SSIS: all it takes is one wrong change to the input file and your job fails because the metadata no longer matches.
Yes, ideally this is numeric data and should be stored in a numeric data field (not a string/character field).
This is how I’ve accomplished this scenario in the past; hopefully it will work for your situation, but it really depends on what the resulting data will be used for.
1. Add a Data Conversion step in your data flow task to convert the double-precision float value to a decimal (I used scale = 10)
2. Add a second Data Conversion step to convert the decimal value back to a Unicode string
3. Change your OLE DB destination mapping to use the new output column from the second Data Conversion task.
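The three steps above can be sketched outside SSIS as well; the key point is that rounding the double to a fixed decimal scale before stringifying is what removes the binary noise. A hypothetical Python illustration, with `Decimal` standing in for SSIS's decimal type at scale 10:

```python
from decimal import Decimal

# Step 1 equivalent: the DT_R8 value rendered at full precision.
raw = "%.17g" % 0.0153                      # '0.015299999999999999'

# Step 2 equivalent: convert to a decimal with scale = 10,
# which rounds away the representation noise.
dec = Decimal(raw).quantize(Decimal("1E-10"))

# Step 3 equivalent: convert back to a string for the NVARCHAR column.
print(str(dec))  # 0.0153000000
```

Note the trailing zeros: a scale-10 decimal always carries 10 decimal places, so pick a scale that suits how the strings will be consumed downstream.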
Result: