The Apache Spark SQL connector is used to connect Spotfire to Databricks.
Connecting to the Databricks catalog and databases works as designed, but querying the data results in an error (see Error below).
Online research suggests the solution is to set the parameter EnableQueryResultDownload="0" in the driver configuration (see Reference below); however, there is no option to do this via the Spotfire connector pane (see attached .png).
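For context, this is a minimal sketch of where the parameter would go if the DSN were configured directly in the Simba Spark ODBC driver (Linux odbc.ini shown). The DSN name, driver path, host, and HTTPPath are placeholders, not values from this environment:

```ini
; Hypothetical odbc.ini entry for the Simba Spark ODBC driver.
; EnableQueryResultDownload=0 makes the driver fetch results inline
; instead of downloading result files from the cloud store.
[Databricks]
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so
Host=example.cloud.databricks.com
Port=443
HTTPPath=/sql/1.0/warehouses/<warehouse-id>
SSL=1
ThriftTransport=2
AuthMech=3
UID=token
EnableQueryResultDownload=0
```

The Spotfire connector pane does not expose these driver-level keys, which is why the setting cannot be applied through the built-in Apache Spark SQL connector.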
Resolution
The issue is resolved by using TIBCO's Custom Connector for Databricks (https://community.spotfire.com/files/file/83-custom-connector-for-spotfire%C2%AE-to-connect-to-azure-databricks/).
The Custom Connector looks almost identical to the Apache Spark SQL connector, with slight UI changes that (1) allow SSO and (2) enable Proxy and Advanced options for the ODBC connection.
We believe the Custom Connector includes bug fixes that improve its usability, and it should replace the inferior Apache Spark SQL connector as part of the base install.
Error
ERROR;2024-02-22T18:26:52,595-05:00;2024-02-22 23:26:52,595;b136262a-b45b-4923-8061-0caec1f25f46;0419105fb8ufMN;277;svc_psfs03@evercore.local;TsasUserSession;Spotfire.Dxp.Worker.Services.Web.Automation.JobInstance;"Automation job b7bc12ec-dc20-400e-97a5-4f74e63787ad failed in task 1 Open Analysis from Library: Spotfire.Dxp.Framework.Library.LibraryException: Import failed ---> Spotfire.Dxp.Data.Exceptions.ImportException: An error occurred when executing a query in the external data source.
External error:
ERROR [HY000] [Simba][Hardy] (35) Error from server: error code: '0' error message: '[Simba][Hardy] (134) File df2ca4b4-fc10-4eab-adf3-aca527a9c2bb: A retriable error occurred while attempting to download a result file from the cloud store but the retry limit had been exceeded. Error Detail: File df2ca4b4-fc10-4eab-adf3-aca527a9c2bb: The result file URL had expired on 1708645311447 (Unix timestamp)'.
Reference
Setting the Simba driver configuration: https://community.fabric.microsoft.com/t5/Service/Error-on-PBI-Service-dataset-refresh/m-p/2633552