Run Spark Job Definitions in Pipelines with Service Principal or Workspace Identity
Microsoft Fabric Blog details how to securely orchestrate Spark job definitions in Data Factory pipelines using Service Principal or Workspace Identity authentication, providing developers with best practices for automation and enterprise security.
The Spark job definition activity in Microsoft Fabric Data Factory pipelines now supports the connection property, enabling a more secure and production-ready method to run Spark Job Definitions (SJD).
What’s New?
With this update, you can configure Spark job definition activities to run under:
- Service Principal (SPN) authentication
- Workspace Identity (WI) authentication
These methods are recommended for production environments, ensuring:
- Operational reliability: Minimizing issues from user credential changes or deactivation.
- Enterprise-grade security: Utilizing service-based authentication for better risk management and compliance.
- Consistent automation: Allowing pipelines to run smoothly without manual intervention.

Why it Matters
Previously, Data Factory pipelines often relied on user authentication, which risked disruptions when employees left the organization or tokens expired. By adopting the connection property with SPN or WI, users benefit from:
- Scalable orchestration for complex Spark job workflows
- Improved governance through centralized identity management
- Future-proof automation for critical production workloads
How to Get Started
- Add a Spark job definition activity in your Data Factory pipeline.
- In the Connection section, choose to configure a new connection or select an existing one.
- Specify credentials or identity configurations, including SPN and WI.
- Run your pipeline to experience secure and automated job execution.
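Beyond the pipeline UI, the same service-principal pattern can trigger a pipeline run programmatically through the Fabric REST API. The sketch below is illustrative only: the endpoint path and token scope are assumptions based on the public Fabric Job Scheduler documentation, and the placeholder IDs are hypothetical. In a real run, the bearer token would come from `azure.identity.ClientSecretCredential` using the same SPN configured in the connection.

```python
# Illustrative sketch (not the official Fabric SDK): building the REST call
# that starts an on-demand pipeline job as a service principal.
# Endpoint path and scope are assumptions from public Fabric documentation.
import json
import urllib.request

FABRIC_SCOPE = "https://api.fabric.microsoft.com/.default"  # assumed SPN token scope

def build_run_request(workspace_id: str, pipeline_id: str, token: str) -> urllib.request.Request:
    """Build the POST request that starts an on-demand pipeline job instance."""
    url = (f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
           f"/items/{pipeline_id}/jobs/instances?jobType=Pipeline")
    return urllib.request.Request(
        url,
        data=json.dumps({}).encode(),  # empty payload: run with default parameters
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# A real run would first obtain `token` with the azure-identity package, e.g.:
#   ClientSecretCredential(tenant_id, client_id, client_secret)
#       .get_token(FABRIC_SCOPE).token
# and then submit the request with urllib.request.urlopen(...).
```

Because the request carries an SPN-issued token rather than a user's credentials, the call keeps working regardless of individual account changes, which is the same operational benefit the connection property brings to the activity itself.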
For further details, refer to the Spark Job Definition activity documentation.
This post appeared first on the Microsoft Fabric Blog.