
Towards Zero-Trust Database Security – Part 2

By Walid Rjaibi, Department of Computing and Mathematics, Manchester Metropolitan University and the IBM Canada Lab, and Mohammad Hammoudeh, Department of Computing and Mathematics, Manchester Metropolitan University

December 2019

Read Part 1.

1. Introduction

In Part One of this article, we explored the direct and indirect means through which data in a database system can be accessed and the challenges these pose to adhering to the basic tenets of zero-trust security. Here, we outline the solutions best suited to address these challenges and enable enterprises to implement zero-trust database security without negatively impacting core database tenets such as query performance.

2. Separation of duties

Traditionally, database systems have been designed such that the Database Administrator (DBA) manages all aspects of the database, including security and auditing. The DBA has also inherently had full access to all tables in the database. With the emergence of insider threats as a security concern as important as external threats [1], this traditional model clearly hampers an organization’s ability to fully implement zero-trust database security.

We contend that database systems must allow organizations to vest security administration and database administration in two non-overlapping roles so that separation of duties can be enforced. This separation ensures that the DBA has no implicit access to the data in the database and enables organizations to better adhere to zero-trust security. It may also dictate the type of database system to adopt, as not all database systems provide the capabilities to enforce separation of duties.
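As a minimal sketch, in a database system that supports this separation (the statements below follow IBM Db2 syntax, one system that provides a dedicated security administration authority; the user names are illustrative), security administration and database administration can be vested in two different users:

```sql
-- Vest security administration in a dedicated authority, distinct from the DBA.
-- The security administrator manages audit policies, row permissions,
-- column masks, and security labels.
GRANT SECADM ON DATABASE TO USER alice;

-- Grant database administration without implicit access to the data itself,
-- so the DBA can manage the database but cannot read the tables.
GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL ON DATABASE TO USER bob;
```

With this split, neither user alone can both weaken the security policy and access the data, which is the essence of separation of duties.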

3. Data encryption

Indirect access is especially dangerous because it completely bypasses all access control and auditing in the database system. A powerful countermeasure against indirect access is data encryption, as encrypted data is of no value to an attacker. However, database encryption comes in many forms, and not all of them address the indirect access threats outlined in Part One. There are also performance implications to take into account when selecting a database encryption solution.

Fig. 1 contrasts the key database encryption options. Self-encrypting disks and file system encryption provide the broadest coverage (they encrypt entire disks or file systems), but they protect only against indirect access to storage media. Tablespace encryption, full database encryption, and column encryption protect against indirect access to both storage media and file systems. Column encryption, however, is intrusive to applications and negatively affects performance. Tablespace encryption may create a vulnerability when a DBA inadvertently moves data from an encrypted tablespace to an unencrypted one, or when data is held in temporary tablespaces. Full database encryption therefore allows organizations to implement zero-trust security without compromising on either the database side or the security side. The design of one such solution is discussed in detail in [2]. As an example, consider a classical 3-tier banking application that stores client data in its backend database. To protect this data against indirect access, the application would enable full database encryption for its backend database. Using the solution discussed in [2], this can be achieved with a single SQL statement at database creation time.
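As an illustrative sketch (using IBM Db2 syntax as one concrete example of such a solution; the database name is hypothetical), the banking application's backend database can be created with full database encryption enabled:

```sql
-- Create the backend database with full (native) database encryption.
-- All table data, logs, and backups are encrypted transparently;
-- the application itself requires no changes.
CREATE DATABASE bankdb ENCRYPT CIPHER AES KEY LENGTH 256;
```

Because encryption happens below the SQL layer, queries and applications run unchanged, which is what allows zero-trust protection without compromising core database tenets.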


Figure 1. Database encryption options.

4. Fine-grained access control

Fine-Grained Access Control (FGAC) refers to the ability to control access to database tables at the row level, column level, or cell level. This level of granularity ensures users are granted only the privileges they need and is paramount for mitigating the direct access scenarios outlined in Part One. However, database FGAC comes in many forms and not all forms adequately address the direct access threats. There are also usability implications that need to be taken into account when selecting a database FGAC solution.

Fig. 2 contrasts the database FGAC options. Database views [3] and application-based FGAC provide the most flexibility in expressing FGAC rules, but the security they provide is not data-centric and can be bypassed. Label-Based Access Control (LBAC) [5] is a data-centric security model where the security policy is always enforced, regardless of whether the table is accessed directly or indirectly through a view. However, LBAC lacks flexibility when it comes to expressing security rules outside the rigid No Read-Up and No Write-Down rules of Multilevel Security (MLS) [6]. Row permissions and column masks [4] combine the benefits of views and LBAC, and are well suited to implementing zero-trust security. As an example, consider our banking application again. Suppose that client data is stored in a table called CLIENT, and that the bank’s security policy allows only members of the TELLER role to see the full account number in table CLIENT; anyone else can see only the last 4 digits. Using the solution discussed in [4], this can be achieved using SQL as shown below. The mask construct is automatically evaluated by the database system each time the account number column is accessed and ensures the bank’s security policy is enforced.

  CREATE MASK account_mask ON client
    FOR COLUMN account RETURN
      CASE WHEN VERIFY_ROLE_FOR_USER(SESSION_USER, 'TELLER') = 1
           THEN account
           ELSE 'XXXXXXXXXXXX' || SUBSTR(account, LENGTH(account) - 3, 4)
      END
    ENABLE;

Figure 2. Database FGAC options.

5. User identity propagation in multitiered environments

In multitiered database environments, the application interacts with the database system using a generic user ID. This model hinders zero-trust security because the database system never sees the actual end user identities. One major implication is diminished user accountability: the database audit log shows only a generic user ID, with no reference to the actual end users behind the application.

Some database systems provide the notion of an Application Context to give applications the means to propagate the end user identity to the database system, where it can be used for auditing purposes [7]. Other solutions, such as the Trusted Context concept introduced in [4], use a more formal mechanism that establishes a trust relationship between the database system and the application and propagates end user identities to the database system in a controlled and secure manner.

Strategies for implementing zero-trust database security must consider multitiered database environments to ensure that user accountability is maintained. This may in turn dictate the type of database system to adopt as not all database systems necessarily provide the capabilities to enable applications to propagate end user identities. To give an example, let’s continue with our banking application. To ensure that the actual end user identities are propagated to the database, the application can leverage the trusted context concept introduced in [4]. This requires the following steps:

  1. The administrator creates a trusted context object to define a trust relationship between the application and its backend database.
  2. The application establishes a trusted connection with its backend database.
  3. Before issuing any request to the database on behalf of an end user, the application switches the current user of the connection to that user. This automatically propagates the end user identity to the database, where it is used for all access control and auditing until the application switches users again.
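Step 1 above can be sketched as follows (again using IBM Db2 syntax for illustration; the context name, application authorization ID, address, and end user names are all hypothetical):

```sql
-- Define a trust relationship with the banking application. A connection is
-- trusted only when it comes from the application's authorization ID at the
-- listed address, and only the named end users may be switched to on it.
CREATE TRUSTED CONTEXT bankapp_ctx
  BASED UPON CONNECTION USING SYSTEM AUTHID bankapp
  ATTRIBUTES (ADDRESS '192.0.2.10')
  WITH USE FOR alice, bob
  ENABLE;
```

Steps 2 and 3 are performed through the database driver's connection API rather than SQL: the application requests a trusted connection and then switches the connection's current user before each unit of work, so that alice or bob, not bankapp, appears in the audit log.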

6. Conditional authorization

Traditional database authorization provides no control over when a particular privilege can be exercised. One major use case where this model falls short is application bypass: an application administrator may abuse the application credentials to access the database outside the scope of the application.

Some database systems allow organizations to require the database system to verify additional attributes before allowing a user to exercise their privileges. For example, the Trusted Context concept introduced in [4] addresses application bypass by having the database system authorize the application user ID only when those additional attributes have been verified. An application administrator who wishes to abuse the application credentials by accessing the database outside the scope of the application will therefore find it hard to do so.
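As a sketch of this idea (IBM Db2 syntax for illustration; the role, context, table, and address names are hypothetical), the application's privileges can be granted to a role that is available only within a trusted context, so the credentials are useless outside it:

```sql
-- Grant the application's table privileges to a role, not to the
-- application ID directly.
CREATE ROLE bankapp_role;
GRANT SELECT, INSERT, UPDATE ON client TO ROLE bankapp_role;

-- The role is acquired only on connections that satisfy the context's
-- attributes (here, the application's host address).
CREATE TRUSTED CONTEXT bankapp_only
  BASED UPON CONNECTION USING SYSTEM AUTHID bankapp
  ATTRIBUTES (ADDRESS '192.0.2.10')
  DEFAULT ROLE bankapp_role
  ENABLE;
```

An administrator who takes the application credentials to another machine can still authenticate, but the connection does not satisfy the context's attributes, so the role, and with it all access to the CLIENT table, is never acquired.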

Strategies for implementing zero-trust database security must consider enforcing conditional authorization to protect against privilege abuse scenarios. This may also influence the choice of database system to adopt, as not all database systems support conditional authorization.

7. Conclusions

Databases contain an enterprise’s most critical data and are attacked by both insiders and outsiders. Implementing zero-trust database security is therefore paramount to protecting that data. While user authentication, Transport Layer Security, and auditing are standard practices usually implemented adequately by most organizations, the indirect and direct threats outlined in this article require careful consideration, including of the choice of database system to adopt. Table 1 summarizes the indirect and direct threats we outlined, together with the security best practices to address them and enable adherence to zero-trust security.

Table 1. Implementing Zero-trust Database Security


References

  1. Verizon, 2017 Data Breach Investigations Report, https://www.knowbe4.com/hubfs/rp_DBIR_2017_Report_execsummary_en_xg.pdf, 2017.
  2. W. Rjaibi, “Holistic Database Encryption”, Proc. International Conference on Security and Cryptography, 2018.
  3. R. Elmasri, S. Navathe, Fundamentals of Database Systems, 6th ed., Addison-Wesley, 2010.
  4. W. Rjaibi, M. Hammoudeh, “Fine-Grained Database Authorization and User Identity Propagation in Multitiered Environments”, IEEE Trans. on Knowledge and Data Engineering, submitted for publication, 2019.
  5. W. Rjaibi, P. Bird, “A Multi-Purpose Implementation of Mandatory Access Control in Relational Database Management Systems”, Proc. International Conference on Very Large Data Bases, 2004.
  6. W. Rjaibi, “An Introduction to Multilevel Secure Relational Database Management Systems”, Proc. Conference of the Centre for Advanced Studies on Collaborative Research (CASCON), 2004.
  7. Oracle, “Defense-in-Depth Database Security for On-Premises and Cloud Databases”, https://www.oracle.com/technetwork/database/security/security-compliance-wp-12c-1896112.pdf, 2019.


Walid Rjaibi is a Distinguished Engineer and Chief Technology Officer (CTO) for Data Security with IBM in Toronto, Canada. Prior to his current role, Walid was a Research Staff Member in network security and cryptography with IBM Research in Zurich, Switzerland. Walid’s work on data security has resulted in 26 granted patents and several publications in journals and conference proceedings such as the IDUG Solutions Journal, the International Conference on Security and Cryptography (SECRYPT), the International Conference on Data Engineering (ICDE), and the International Conference on Very Large Data Bases (VLDB).

Mohammad Hammoudeh is the Head of the CfACS IoT Laboratory and a Reader in Future Networks and Security with the Department of Computing and Mathematics, Manchester Metropolitan University. He has researched and published in the field of big sensory data mining and visualization. He is a highly proficient, experienced, and professionally certified cybersecurity professional, specializing in threat analysis and information and network security management. His research interests include highly decentralized algorithms, communication, and cross-layered solutions for the Internet of Things and wireless sensor networks.


