Top 20 Most Popular Data Modelling Interview Questions & Answers [For Beginners & Experienced]
by Rohit Sharma
Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.
JUN 10, 2021
Data Science is one of the most lucrative career fields in the present job market. And as competition picks up, job interviews are also getting more innovative by the day. Employers want to test candidates’ conceptual knowledge and practical understanding of relevant subjects and technology tools. In this blog, we will discuss some relevant data modelling interview questions to help you make a powerful first impression!
Top Data Modelling Interview Questions and Answers
Here are 20 data modelling interview questions along with the sample answers that will take you through the beginner, intermediate, and advanced levels of the topic.
1. What is Data Modeling? List the types of data models.
Data modelling involves creating a representation (or model) of the available data and of how that data will be stored in a database.
A data model comprises entities (such as customers, products, manufacturers, and sellers) along with the attributes users want to track about them. For instance, Customer Name is an attribute of the Customer entity. These entities and attributes ultimately take the shape of tables in a database.
There are three basic types of data models, namely:
- Conceptual: Data architects and business stakeholders create this model to organise, scope, and define business concepts. It dictates what a system should contain.
- Logical: Put together by data architects and business analysts, this model maps technical rules and data structures, determining how the system will be implemented independently of any particular database management system (DBMS).
- Physical: Database architects and developers create this model to describe how the system should operate with a specific DBMS.
2. What is a Table? Explain Fact and Fact Table.
A table holds data in rows (horizontal alignments) and columns (vertical alignments). Rows are also known as records or tuples, whereas columns may be referred to as fields.
A fact is quantitative data like “net sales” or “amount due”. A fact table stores numerical data as well as some attributes from dimensional tables.
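To make this concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for the example, not taken from any particular system:

```python
import sqlite3

# Minimal sketch: a fact table storing the quantitative measures
# "net_sales" and "amount_due", plus a key into a dimension table.
# All names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT
);
CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    net_sales  REAL,     -- quantitative fact
    amount_due REAL      -- quantitative fact
);
""")
```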
3. What do you mean by (i) dimension, (ii) granularity, (iii) data sparsity, (iv) hashing, and (v) database management system?
(i) Dimensions represent qualitative data such as class and product. Therefore, a dimensional table containing product data will have attributes like the product category, product name, etc.
(ii) Granularity refers to the level of detail stored in a table. It can be high or low: a table holding transaction-level data has high granularity, whereas a fact table holding aggregated data has low granularity.
(iii) Data sparsity refers to the number of empty cells in a database. In other words, it indicates how much data we actually have for a particular entity or dimension in the data model. High sparsity leads to large databases, as extra space is consumed when aggregations are computed over the many empty cells.
(iv) Hashing computes an index value from a search key and uses it to locate the desired data record directly through an index structure (a toy illustration follows this answer).
(v) A Database Management System (DBMS) is software comprising a group of programs for manipulating the database. Its primary purpose is to store and retrieve user data.
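As a toy illustration of the idea behind hashing, a Python dict hashes a key to jump straight to a record's position instead of scanning every row (the data here is invented):

```python
# Toy hash index: the dict hashes each key to locate the matching
# record's position in (average) constant time.
records = [("C001", "Alice"), ("C002", "Bob"), ("C003", "Carol")]
hash_index = {key: pos for pos, (key, _) in enumerate(records)}

def lookup(key):
    # Jump directly to the record instead of scanning the whole table.
    return records[hash_index[key]]

print(lookup("C002"))  # ('C002', 'Bob')
```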
4. Define Normalisation. What is its purpose?
The normalisation technique divides larger tables into smaller ones and links them using relationships. It organises tables in a way that minimises the dependency and redundancy of the data (a small sketch follows the list below).
The commonly cited normal forms are:
- First normal form (1NF)
- Second normal form (2NF)
- Third normal form (3NF)
- Boyce-Codd normal form (BCNF)
- Fourth normal form (4NF)
- Fifth normal form (5NF)
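As promised, a small sqlite3 sketch with invented names: a flat table like orders_flat(order_id, customer_name, customer_city, item) would repeat the customer's details on every order, so normalising splits it into two linked tables and removes that redundancy:

```python
import sqlite3

# Normalised version of the flat orders table: customer details live
# in one place, and each order points at them via customer_id.
# All names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    customer_id   INTEGER PRIMARY KEY,
    customer_name TEXT,
    customer_city TEXT
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(customer_id),
    item        TEXT
);
""")
```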
5. What is the utility of denormalisation in data modelling?
Denormalisation is used when constructing a data warehouse, especially where queries would otherwise have to join a large number of tables. The strategy is applied to a previously normalised database: redundant data is deliberately reintroduced to speed up reads.
6. Elucidate the differences between primary key, composite primary key, foreign key, and surrogate key.
A primary key is a mainstay of every data table. It denotes a column or a group of columns that uniquely identifies a table's rows. The primary key value cannot be null. When more than one column makes up the primary key, it is known as a composite primary key.
A foreign key, on the other hand, is a group of attributes that links parent and child tables. The foreign key value in the child table references a primary key value in the parent table.
A surrogate key is used to identify each record in those situations where the users do not have a natural primary key. This artificial key is typically represented as an integer and does not lend any meaning to the data contained in the table.
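The following sqlite3 sketch (all names invented) puts the four key types side by side:

```python
import sqlite3

# All four key types in one sketch; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,  -- surrogate key: artificial, meaningless integer
    email       TEXT UNIQUE           -- a natural candidate key
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,  -- single-column primary key
    customer_id INTEGER REFERENCES customer(customer_id)  -- foreign key to the parent
);
CREATE TABLE order_line (
    order_id INTEGER REFERENCES orders(order_id),
    line_no  INTEGER,
    PRIMARY KEY (order_id, line_no)   -- composite primary key
);
""")
```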
7. Compare the OLTP system with the OLAP process.
OLTP is an online transactional system that relies on traditional databases to perform real-time business operations. The OLTP database has normalised tables, and the response time is usually within milliseconds.
Conversely, OLAP is an online process meant for data analysis and retrieval. It is designed for analysing large volumes of business measures by category and attribute. Unlike OLTP, OLAP uses a data warehouse with denormalised tables and operates with response times of seconds to minutes.
8. List the standard database schema designs.
A schema is a diagram or illustration of data relationships and structures. There are two schema designs in data modelling, namely star schema and snowflake schema.
- A star schema comprises a central fact table and several dimension tables connected to it. The primary key of each dimension table appears as a foreign key in the fact table (a minimal sketch follows this list).
- A snowflake schema has the same central fact table as the star schema but a higher level of normalisation: the dimension tables are themselves normalised into multiple layers, so the diagram resembles a snowflake.
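Here is the promised star schema in miniature, again using sqlite3 with invented names:

```python
import sqlite3

# Star schema in miniature: one central fact table whose foreign keys
# point at the surrounding dimension tables. Names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, full_date TEXT);
CREATE TABLE dim_product (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT   -- a snowflake schema would split this out into dim_category
);
CREATE TABLE fact_sales (
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    units_sold INTEGER,
    net_sales  REAL
);
""")
```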
9. Explain discrete and continuous data.
Discrete data is finite and takes defined values, such as gender or telephone numbers. Continuous data, on the other hand, can take any value in a range and changes in an ordered manner; for example, age or temperature.
10. What are sequence clustering and time series algorithms?
A sequence clustering algorithm groups together:
- Sequences of data in which events follow one another, and
- Paths that are related or similar to each other.
Time series algorithms predict continuous values over time. For instance, they can forecast sales and profit figures from performance recorded over time, as the toy sketch below illustrates.
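This is only a toy: a linear trend fitted with numpy to six months of invented sales figures, extrapolated one month ahead.

```python
import numpy as np

# Toy time-series forecast: fit a linear trend to invented monthly
# sales and extrapolate one month ahead.
months = np.arange(1, 7)
sales  = np.array([100, 108, 115, 121, 130, 138])

slope, intercept = np.polyfit(months, sales, deg=1)  # least-squares trend line
forecast = slope * 7 + intercept                     # month 7
print(round(forecast, 1))                            # ~144.9, continuing the trend
```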
Now that you have brushed up on the basics, here are ten more frequently asked data modelling questions for your practice!
11. Describe the process of data warehousing.
Data warehousing connects and manages raw data from heterogeneous sources. This data collection and analysis process allows business enterprises to get meaningful insights from varied locations in one place, which forms the core of Business Intelligence.
12. What are the key differences between a data mart and a data warehouse?
A data mart enables tactical decisions for business growth by focusing on a single business area and following a bottom-up model. On the other hand, a data warehouse facilitates strategic decision-making by emphasising multiple areas and data sources and adopting a top-down approach.
13. Mention the types of critical relationships found in data models.
Critical relationships can be categorised into:
- Identifying: Connects parent and child tables with a solid line; the child table's reference column is part of its primary key.
- Non-identifying: The tables are connected by a dashed line, signifying that the child table's reference column is not part of the primary key.
- Self-recursive: A column in a table references that same table's primary key, forming a recursive relationship (see the sketch after this list).
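The sketch below expresses all three relationship types as sqlite3 DDL (names invented):

```python
import sqlite3

# Identifying vs non-identifying vs self-recursive, in DDL form.
# Names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoice (invoice_id INTEGER PRIMARY KEY);

-- Identifying: the foreign key invoice_id is PART of the child's primary key.
CREATE TABLE invoice_line (
    invoice_id INTEGER REFERENCES invoice(invoice_id),
    line_no    INTEGER,
    PRIMARY KEY (invoice_id, line_no)
);

-- Non-identifying and self-recursive: manager_id is an ordinary column
-- that references this same table's own primary key.
CREATE TABLE employee (
    employee_id INTEGER PRIMARY KEY,
    manager_id  INTEGER REFERENCES employee(employee_id)
);
""")
```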
14. What are some common errors that you encounter while modelling data?
Building broad data models can get tricky, and the chances of failure increase when a model runs to more than 200 tables. It is also critical for the data modeller to have an adequate working knowledge of the business mission; otherwise, the data models run the risk of going haywire.
Unnecessary surrogate keys pose another problem. Surrogate keys should be used sparingly, only when natural keys cannot fulfil the primary key's role.
One can also encounter situations of inappropriate denormalisation where maintaining data redundancy can become a considerable challenge.
15. Discuss hierarchical DBMS. What are the drawbacks of this data model?
A hierarchical DBMS stores data in a tree-like structure. The format uses the parent-child relationship, where a parent may have many children but a child can have only one parent (a toy sketch follows the list below).
The drawbacks of this model include:
- Lack of flexibility and adaptability to changing business needs;
- Records can only be reached by navigating down from the root, which makes ad hoc access awkward;
- Many-to-many relationships cannot be represented without duplicating data, leading to disunity in the data.
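A toy Python structure makes the access pattern visible: each parent holds its children directly, so every child has exactly one parent and lookups must walk down from the root (data invented):

```python
# Toy hierarchical record: one parent per child, reached from the root.
org = {
    "Sales": {
        "EMEA": ["Alice", "Bob"],
        "APAC": ["Carol"],
    },
}

def find_region(root, region):
    # Every lookup must navigate the parent path from the top.
    for dept, regions in root.items():
        if region in regions:
            return dept, regions[region]
    return None

print(find_region(org, "APAC"))  # ('Sales', ['Carol'])
```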
16. Detail two types of data modelling techniques.
Entity-Relationship (E-R) and Unified Modeling Language (UML) are the two standard data modelling techniques.
E-R is used in software engineering to produce data models or diagrams of information systems. UML is a general-purpose language for database development and modelling that helps visualise the system design.
17. What is a junk dimension?
A junk dimension is created by combining low-cardinality attributes (indicators, booleans, or flag values) into a single dimension. These values are removed from the other tables and grouped, or "junked", into an abstract dimension table; the technique is also sometimes used to manage rapidly changing dimensions within data warehouses.
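A junk dimension in miniature, sketched with sqlite3 (names and flags invented): three boolean indicators are pulled out of the fact table and combined into one small dimension enumerating their value combinations.

```python
import sqlite3

# Junk dimension sketch: three low-cardinality flags combined into one
# small dimension table. Names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE dim_order_flags (
    flag_id      INTEGER PRIMARY KEY,  -- surrogate key the fact table references
    is_gift      INTEGER,              -- 0/1 indicator
    is_expedited INTEGER,
    is_returned  INTEGER
)
""")
# 2**3 = 8 rows cover every combination of the three boolean flags.
rows = [(i, i & 1, (i >> 1) & 1, (i >> 2) & 1) for i in range(8)]
conn.executemany("INSERT INTO dim_order_flags VALUES (?, ?, ?, ?)", rows)
```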
18. State some popular DBMS software.
MySQL, Oracle, Microsoft Access, dBase, SQLite, PostgreSQL, IBM DB2, and Microsoft SQL Server are some of the most-used DBMS tools in the modern-day software development arena.
19. What are the advantages and disadvantages of using data modelling?
Pros of using data modelling:
- Business data can be better managed by normalising it and defining its attributes.
- Data modelling allows data to be integrated across systems and reduces redundancy.
- It makes way for an efficient database design.
- It enables inter-departmental cooperation and teamwork.
- It allows easy access to data.
Cons of using data modelling:
- Data modelling can sometimes make the system more complex.
- The model is tightly coupled to the database's structure, so even small structural changes can ripple through the application.
20. Explain data mining and predictive modelling analytics.
Data mining is a multi-disciplinary skill. It involves applying knowledge from fields like Artificial Intelligence (AI), Machine Learning (ML), and Database Technologies. Here, practitioners are concerned with uncovering the mysteries of data and discovering previously unknown relationships.
Predictive modelling refers to testing and validating models that can predict specific outcomes. This process has several applications in AI, ML, and Statistics.
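To tie the two ideas together, here is a toy predictive-modelling sketch in numpy with invented figures: a least-squares line is trained on the first four observations, then validated against the held-out fifth before being trusted.

```python
import numpy as np

# Toy predictive-modelling sketch: train on four points, validate on a
# held-out fifth. All figures are invented.
x = np.array([1, 2, 3, 4, 5])
y = np.array([52, 58, 61, 68, 73])

slope, intercept = np.polyfit(x[:4], y[:4], deg=1)  # train
pred = slope * x[4] + intercept                     # predict the held-out point
print(f"predicted {pred:.1f}, actual {y[4]}")       # predicted 72.5, actual 73
```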