Sunday, May 25, 2025

🔄 PostgreSQL Database Cloning via Backup and Restore

 When working with PostgreSQL, there are many situations where you need to clone a database:

  • Creating a dev or staging environment from production

  • Testing migrations or data loads in isolation

  • Running experiments on a snapshot of real data

In this post, I’ll walk you through a clean, script-driven approach to PostgreSQL cloning using pg_dump and pg_restore — with a focus on multi-database support, data-only restores, and sequence syncing.


🧩 Why Use Backup/Restore for Cloning?

Cloning using backup/restore is ideal when:

  • You can’t (or don’t want to) clone entire clusters

  • You're working across environments (e.g., prod → staging)

  • You want selective control over schema, data, or both

Compared to CREATE DATABASE ... TEMPLATE, this gives more flexibility and control.


🚀 Introducing pg_multi_backup_restore_interactive.sh

We wrote a shell script that allows you to:

  • Back up multiple PostgreSQL databases

  • Restore them (with schema or just data)

  • Target new or existing databases

  • Handle foreign key and sequence issues

Let’s walk through the process.


📦 Back Up Multiple Databases

The script uses pg_dump in custom format (-Fc) to back up databases:

pg_dump -Fc -f ./pg_backups/mydb.dump mydb


This format is preferred because it supports parallel restore, filtering, and more control.

The script will prompt you for:

  • Host, port, and user

  • List of source databases

  • Backup directory

All dumps are saved to individual files like mydb.dump, anotherdb.dump, etc.
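
Internally, the backup step amounts to building one pg_dump command per source database. A minimal Python sketch of that loop (host, user, and database names are placeholders; the real tool is an interactive shell script):

```python
from pathlib import Path

def build_dump_commands(host, port, user, databases, backup_dir):
    """Build one `pg_dump -Fc` command per source database."""
    out = Path(backup_dir)
    cmds = []
    for db in databases:
        cmds.append([
            "pg_dump",
            "-h", host, "-p", str(port), "-U", user,
            "-Fc",                          # custom format: supports parallel restore
            "-f", str(out / f"{db}.dump"),  # one dump file per database
            db,
        ])
    return cmds

cmds = build_dump_commands("localhost", 5432, "postgres",
                           ["mydb", "anotherdb"], "./pg_backups")
for c in cmds:
    print(" ".join(c))
```

Each command would then be run with subprocess (or, as in the script, directly in the shell).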



🔁 Restore into New or Existing Databases

The script then lets you choose:

  • Full restore (schema + data)

  • Data-only restore (e.g., when schema already exists)

✅ Data-Only Restore Use Case

This is great when you already have a DB created via migrations, but just want to inject live data from another environment.

It handles:

  • Truncating all user tables

  • Temporarily disabling foreign key constraints (session_replication_role = replica)

  • Using pg_restore --data-only


🧹 Foreign Key Constraints and Truncation

One big gotcha with data-only restores: foreign key constraint violations.

Example:


ERROR:  insert or update on table "employees" violates foreign key constraint "employees_job_id_fkey"


This happens when you insert data into a child table (employees) before its referenced parent table (jobs) has valid entries.

To avoid this:

  • We truncate all tables

  • Temporarily disable foreign keys and triggers during the restore
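
The failure is easy to reproduce in miniature. A sqlite3 sketch (table names borrowed from the error above; sqlite stands in for PostgreSQL here): with enforcement on, a child row that arrives before its parent is rejected; switching enforcement off (the analogue of session_replication_role = replica) lets the load proceed in any order.

```python
import sqlite3

# Autocommit mode so the PRAGMA statements take effect immediately
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")   # enforce FKs, as PostgreSQL does
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE employees (
                  id INTEGER PRIMARY KEY,
                  job_id INTEGER REFERENCES jobs(id))""")

# A child row arriving before its parent is rejected
try:
    conn.execute("INSERT INTO employees VALUES (1, 42)")
    fk_error = False
except sqlite3.IntegrityError:
    fk_error = True

# Turn enforcement off (analogue of session_replication_role = replica),
# load in any order, then turn it back on
conn.execute("PRAGMA foreign_keys = OFF")
conn.execute("INSERT INTO employees VALUES (1, 42)")
conn.execute("INSERT INTO jobs VALUES (42)")
conn.execute("PRAGMA foreign_keys = ON")

print(fk_error)   # True: the out-of-order insert failed while FKs were on
```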



🧪 Real-World Use Case

Imagine you want to clone prod_customers and prod_orders into:

  • staging_customers

  • staging_orders


./pg_multi_backup_restore_interactive.sh

Provide:

  • prod_customers,prod_orders as the source

  • staging_customers,staging_orders as the target

  • Choose data-only restore

The script will:

  • Truncate all tables in target DBs

  • Load data from the corresponding .dump files

  • Reset all sequences to match the restored data
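
Sequence syncing relies on a standard PostgreSQL idiom: point each sequence at the maximum value of the column it feeds. A sketch that generates the statement per table (table and column names are placeholders, and the query assumes serial/identity columns):

```python
def setval_sql(table, column="id"):
    """SQL that resets a serial/identity sequence to the column's current max."""
    return (
        f"SELECT setval(pg_get_serial_sequence('{table}', '{column}'), "
        f"COALESCE(MAX({column}), 1)) FROM {table};"
    )

for t in ["customers", "orders"]:
    print(setval_sql(t))
```

Running the generated statements (e.g., via psql) makes the next inserted row get an id above anything that was restored.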

🛑️ Security Tips

  • Use .pgpass to avoid password prompts securely

  • Ensure the target DB has a recent schema before data-only restore

  • Only superusers can disable triggers (--disable-triggers or session_replication_role = replica)
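
For reference, a .pgpass entry is a single colon-separated line per server (values below are placeholders), and libpq ignores the file unless it is private:

```
# ~/.pgpass  (chmod 600 ~/.pgpass)
# hostname:port:database:username:password
localhost:5432:*:postgres:your_password_here
```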


📄 Full Script Available

The script is fully interactive and flexible. It’s designed to be used in devops pipelines or manually for one-off migrations.

Want to grab the script?

https://github.com/dev-sincode/postgres_clone


Friday, October 4, 2024

DBDEV Util package

CREATE OR REPLACE PACKAGE finance_util_pkg IS

  -- 1. Get current timestamp
  FUNCTION get_current_timestamp RETURN TIMESTAMP;

  -- 2. Get financial quarter for a given date
  FUNCTION get_financial_quarter(p_date DATE) RETURN NUMBER;

  -- 3. Calculate simple interest
  FUNCTION calculate_simple_interest(p_principal NUMBER,
                                     p_rate NUMBER,
                                     p_time NUMBER) RETURN NUMBER;

  -- 4. Calculate compound interest
  FUNCTION calculate_compound_interest(p_principal NUMBER,
                                       p_rate NUMBER,
                                       p_time NUMBER,
                                       p_n NUMBER) RETURN NUMBER;

  -- 5. Generate unique transaction ID
  FUNCTION generate_transaction_id RETURN VARCHAR2;

  -- 6. Check if a date is a weekend
  FUNCTION is_weekend(p_date DATE) RETURN VARCHAR2;

  -- 7. Calculate the number of business days between two dates
  FUNCTION business_days_between(p_start_date DATE,
                                 p_end_date DATE) RETURN NUMBER;

  -- 8. Convert amount to words (e.g., 123 -> 'One Hundred Twenty-Three')
  FUNCTION convert_number_to_words(p_number NUMBER) RETURN VARCHAR2;

  -- 9. Get last day of the month
  FUNCTION get_last_day_of_month(p_date DATE) RETURN DATE;

  -- 10. Get the difference in years between two dates
  FUNCTION years_between(p_start_date DATE, p_end_date DATE) RETURN NUMBER;

  -- 11. Calculate loan EMI (Equated Monthly Installment)
  FUNCTION calculate_emi(p_principal NUMBER,
                         p_rate NUMBER,
                         p_time_in_years NUMBER) RETURN NUMBER;

  -- 12. Validate an IBAN number (International Bank Account Number)
  FUNCTION validate_iban(p_iban VARCHAR2) RETURN VARCHAR2;

  -- 13. Calculate monthly savings for retirement target
  FUNCTION calculate_retirement_savings(p_target NUMBER,
                                        p_years NUMBER,
                                        p_rate NUMBER) RETURN NUMBER;

  -- 14. Get the fiscal year for a given date
  FUNCTION get_fiscal_year(p_date DATE) RETURN VARCHAR2;

  -- 15. Convert currency (based on a rate)
  FUNCTION convert_currency(p_amount NUMBER, p_rate NUMBER) RETURN NUMBER;

  -- 16. Get total working hours between two timestamps (excluding weekends)
  FUNCTION working_hours_between(p_start_ts TIMESTAMP,
                                 p_end_ts TIMESTAMP) RETURN NUMBER;

  -- 17. Generate a random account number
  FUNCTION generate_account_number RETURN VARCHAR2;

  -- 18. Calculate VAT for a given amount
  FUNCTION calculate_vat(p_amount NUMBER, p_vat_rate NUMBER) RETURN NUMBER;

  -- 19. Calculate future value of investment (FVI)
  FUNCTION calculate_future_value(p_present_value NUMBER,
                                  p_rate NUMBER,
                                  p_time NUMBER) RETURN NUMBER;

  -- 20. Get next working day
  FUNCTION get_next_working_day(p_date DATE) RETURN DATE;

END finance_util_pkg;
/

CREATE OR REPLACE PACKAGE BODY finance_util_pkg IS

  -- 1. Get current timestamp
  FUNCTION get_current_timestamp RETURN TIMESTAMP IS
  BEGIN
    RETURN SYSTIMESTAMP;
  END get_current_timestamp;

  -- 2. Get financial quarter for a given date
  FUNCTION get_financial_quarter(p_date DATE) RETURN NUMBER IS
  BEGIN
    RETURN TO_NUMBER(TO_CHAR(p_date, 'Q'));
  END get_financial_quarter;

  -- 3. Calculate simple interest
  FUNCTION calculate_simple_interest(p_principal NUMBER,
                                     p_rate NUMBER,
                                     p_time NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN (p_principal * p_rate * p_time) / 100;
  END calculate_simple_interest;

  -- 4. Calculate compound interest
  FUNCTION calculate_compound_interest(p_principal NUMBER,
                                       p_rate NUMBER,
                                       p_time NUMBER,
                                       p_n NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_principal * POWER((1 + p_rate / (p_n * 100)), p_n * p_time);
  END calculate_compound_interest;

  -- 5. Generate unique transaction ID
  FUNCTION generate_transaction_id RETURN VARCHAR2 IS
  BEGIN
    -- TRUNC avoids a fractional suffix; LPAD keeps the ID a fixed width
    RETURN 'TXN-' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS') ||
           LPAD(TRUNC(DBMS_RANDOM.VALUE(1, 1000000)), 6, '0');
  END generate_transaction_id;

  -- 6. Check if a date is a weekend
  FUNCTION is_weekend(p_date DATE) RETURN VARCHAR2 IS
  BEGIN
    -- TO_CHAR(date, 'D') is NLS-territory-dependent; 'DY' with a fixed
    -- language detects weekends portably
    IF TO_CHAR(p_date, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') IN ('SAT', 'SUN') THEN
      RETURN 'Yes';
    ELSE
      RETURN 'No';
    END IF;
  END is_weekend;

  -- 7. Calculate the number of business days between two dates
  FUNCTION business_days_between(p_start_date DATE,
                                 p_end_date DATE) RETURN NUMBER IS
    l_days NUMBER := 0;
  BEGIN
    FOR r IN (SELECT p_start_date + LEVEL - 1 AS day_in_range
              FROM dual
              CONNECT BY LEVEL <= p_end_date - p_start_date + 1) LOOP
      IF TO_CHAR(r.day_in_range, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH')
         NOT IN ('SAT', 'SUN') THEN
        l_days := l_days + 1;
      END IF;
    END LOOP;
    RETURN l_days;
  END business_days_between;

  -- 8. Convert amount to words (simplified version)
  FUNCTION convert_number_to_words(p_number NUMBER) RETURN VARCHAR2 IS
  BEGIN
    -- Placeholder: the Julian-date trick below only handles integers
    -- from 1 to 5373484; a full solution needs a dedicated routine.
    RETURN TO_CHAR(TO_DATE(p_number, 'J'), 'JSP');
  END convert_number_to_words;

  -- 9. Get last day of the month
  FUNCTION get_last_day_of_month(p_date DATE) RETURN DATE IS
  BEGIN
    RETURN LAST_DAY(p_date);
  END get_last_day_of_month;

  -- 10. Get the difference in years between two dates
  FUNCTION years_between(p_start_date DATE, p_end_date DATE) RETURN NUMBER IS
  BEGIN
    RETURN MONTHS_BETWEEN(p_end_date, p_start_date) / 12;
  END years_between;

  -- 11. Calculate loan EMI (Equated Monthly Installment)
  FUNCTION calculate_emi(p_principal NUMBER,
                         p_rate NUMBER,
                         p_time_in_years NUMBER) RETURN NUMBER IS
    l_rate_per_month NUMBER := p_rate / (12 * 100);
    l_time_in_months NUMBER := p_time_in_years * 12;
  BEGIN
    RETURN (p_principal * l_rate_per_month *
            POWER(1 + l_rate_per_month, l_time_in_months)) /
           (POWER(1 + l_rate_per_month, l_time_in_months) - 1);
  END calculate_emi;

  -- 12. Validate an IBAN number (simple checksum validation)
  FUNCTION validate_iban(p_iban VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    -- Simplified validation logic
    IF LENGTH(p_iban) BETWEEN 15 AND 34 THEN
      RETURN 'Valid';
    ELSE
      RETURN 'Invalid';
    END IF;
  END validate_iban;

  -- 13. Calculate monthly savings for retirement target
  FUNCTION calculate_retirement_savings(p_target NUMBER,
                                        p_years NUMBER,
                                        p_rate NUMBER) RETURN NUMBER IS
    l_rate_per_month NUMBER := p_rate / (12 * 100);
    l_months NUMBER := p_years * 12;
  BEGIN
    -- Monthly payment needed to reach a future-value target:
    -- PMT = FV * r / ((1 + r)^n - 1)
    RETURN p_target * l_rate_per_month /
           (POWER(1 + l_rate_per_month, l_months) - 1);
  END calculate_retirement_savings;

  -- 14. Get the fiscal year for a given date
  FUNCTION get_fiscal_year(p_date DATE) RETURN VARCHAR2 IS
  BEGIN
    -- Assumes a fiscal year that starts on 1 July (shift by six months)
    RETURN TO_CHAR(ADD_MONTHS(p_date, 6), 'YYYY');
  END get_fiscal_year;

  -- 15. Convert currency (using a given rate)
  FUNCTION convert_currency(p_amount NUMBER, p_rate NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_amount * p_rate;
  END convert_currency;

  -- 16. Get total working hours between two timestamps (excluding weekends)
  FUNCTION working_hours_between(p_start_ts TIMESTAMP,
                                 p_end_ts TIMESTAMP) RETURN NUMBER IS
  BEGIN
    -- Simplified: whole days * 8 working hours
    -- (9 AM - 6 PM logic can be added as per requirements)
    RETURN EXTRACT(DAY FROM (p_end_ts - p_start_ts)) * 8;
  END working_hours_between;

  -- 17. Generate a random account number
  FUNCTION generate_account_number RETURN VARCHAR2 IS
  BEGIN
    RETURN 'ACCT-' || DBMS_RANDOM.STRING('X', 10);
  END generate_account_number;

  -- 18. Calculate VAT for a given amount
  FUNCTION calculate_vat(p_amount NUMBER, p_vat_rate NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_amount * (p_vat_rate / 100);
  END calculate_vat;

  -- 19. Calculate future value of investment (FVI)
  FUNCTION calculate_future_value(p_present_value NUMBER,
                                  p_rate NUMBER,
                                  p_time NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_present_value * POWER((1 + p_rate / 100), p_time);
  END calculate_future_value;

  -- 20. Get the next working day
  FUNCTION get_next_working_day(p_date DATE) RETURN DATE IS
  BEGIN
    -- 'DY' with a fixed language avoids NLS-dependent day numbering
    IF TO_CHAR(p_date, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'SAT' THEN
      RETURN p_date + 2;
    ELSIF TO_CHAR(p_date, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'SUN' THEN
      RETURN p_date + 1;
    ELSE
      RETURN p_date;
    END IF;
  END get_next_working_day;

END finance_util_pkg;
/
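
The interest and EMI formulas above are easy to sanity-check outside the database. A small Python mirror of three of the package functions (same formulas, sample values invented):

```python
def simple_interest(principal, rate, time):
    # (P * R * T) / 100, as in calculate_simple_interest
    return principal * rate * time / 100

def compound_interest(principal, rate, time, n):
    # P * (1 + r/(n*100))^(n*t), as in calculate_compound_interest
    return principal * (1 + rate / (n * 100)) ** (n * time)

def emi(principal, annual_rate, years):
    # P * r * (1+r)^n / ((1+r)^n - 1), as in calculate_emi
    r = annual_rate / (12 * 100)
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

print(simple_interest(1000, 5, 2))                 # 100.0
print(round(compound_interest(1000, 10, 2, 1), 2))  # 1210.0
print(round(emi(100000, 12, 1), 2))
```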

Transitioning from SQL to NoSQL Databases: What You Need to Know

In today’s data-driven world, traditional SQL (relational) databases are no longer the only option for developers and data engineers. With the explosion of unstructured data, scalability challenges, and the need for flexibility, NoSQL databases have gained popularity.

If you're coming from a SQL background and want to transition to NoSQL, this guide will walk you through the key concepts, differences, and how to make the shift easier. We’ll also explore a few real-world examples to help you get a practical understanding of how SQL and NoSQL differ.


1. SQL vs NoSQL: The Fundamental Differences

Relational vs Non-relational

  • SQL databases are based on a relational model, which means data is organized into tables (rows and columns) with defined relationships. This structure is excellent for structured data, where relationships are essential, and data consistency is paramount.
  • NoSQL databases (Not Only SQL) are non-relational, meaning they can store and retrieve data that doesn't necessarily adhere to the tabular structure. This is particularly useful for unstructured or semi-structured data such as documents, graphs, and key-value pairs.

Schema Flexibility

  • SQL databases use a fixed schema, meaning the structure of the database (tables, columns, etc.) is predefined. This can become restrictive if your data evolves over time.
  • NoSQL databases offer dynamic schema capabilities, allowing data models to be more flexible and adapt to changes without the need for altering the structure every time you add new data fields.

ACID vs BASE

  • SQL databases adhere to ACID (Atomicity, Consistency, Isolation, Durability) properties, ensuring high data integrity and consistency.
  • NoSQL databases tend to follow the BASE (Basically Available, Soft state, Eventual consistency) model, where data consistency is relaxed for the sake of scalability and availability.

Example:

SQL:

CREATE TABLE Users (
  id INT PRIMARY KEY,
  name VARCHAR(100),
  email VARCHAR(100),
  created_at TIMESTAMP
);

INSERT INTO Users (id, name, email, created_at)
VALUES (1, 'John Doe', 'john@example.com', NOW());

NoSQL (MongoDB):

db.users.insertOne({
  _id: 1,
  name: "John Doe",
  email: "john@example.com",
  created_at: new Date()
});

Notice the difference in structure: SQL enforces a predefined schema, whereas NoSQL (MongoDB) allows the flexibility of adding fields as needed.

2. Types of NoSQL Databases

To transition effectively, it’s essential to understand the different types of NoSQL databases:

  • Document-based (e.g., MongoDB): Stores data as documents (usually JSON or BSON). It’s best for scenarios where you need flexibility in the schema and store hierarchical data.
  • Key-Value Stores (e.g., Redis, DynamoDB): Data is stored as key-value pairs, making it useful for fast retrievals based on a specific key.
  • Column-based (e.g., Apache Cassandra, HBase): Designed for reading and writing large amounts of data across distributed systems, often used for big data applications.
  • Graph-based (e.g., Neo4j): Data is represented as nodes and edges, which is useful when working with highly interconnected data such as social networks.

When to Choose Each:

  • Document-based (MongoDB): Use when you need flexible schemas and frequently evolving data (e.g., content management systems, IoT data).
  • Key-Value (Redis): Great for caching, session management, and scenarios requiring low latency.
  • Column-based (Cassandra): Best for handling massive datasets in a distributed system, such as real-time analytics.
  • Graph-based (Neo4j): Ideal for applications dealing with complex relationships, such as fraud detection or recommendation engines.

3. Migrating from SQL to NoSQL: Key Considerations

1. Understand the Data Models

In SQL, data is normalized and split into different tables to eliminate redundancy (normalization). When transitioning to NoSQL, you'll need to denormalize your data to fit into collections (in MongoDB) or key-value pairs (in Redis). NoSQL databases often prioritize performance over the strict relationships SQL databases maintain.

Example:

In SQL, you might split a blog post into separate tables for posts, authors, and comments. In MongoDB, this could all be stored as a single document:

{
  "title": "My First Post",
  "author": {
    "name": "Jane Doe",
    "email": "jane@example.com"
  },
  "comments": [
    { "text": "Great post!", "author": "User1" },
    { "text": "Thanks for sharing", "author": "User2" }
  ]
}

This allows for faster reads since all related data is stored together, but at the expense of storage efficiency.
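
In code terms, the embedded document is just a nested structure: every related field is one lookup away, with no join step. A plain-Python sketch (not MongoDB client code):

```python
post = {
    "title": "My First Post",
    "author": {"name": "Jane Doe", "email": "jane@example.com"},
    "comments": [
        {"text": "Great post!", "author": "User1"},
        {"text": "Thanks for sharing", "author": "User2"},
    ],
}

# One read returns the post, its author, and all its comments together
print(post["author"]["name"])   # Jane Doe
print(len(post["comments"]))    # 2
```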

2. Consider Querying Differences

The way you query data in SQL (using joins and complex queries) is quite different from NoSQL. While SQL uses JOIN statements to retrieve related data across tables, NoSQL databases (like MongoDB) avoid joins for performance reasons.

SQL Example:

SELECT orders.id, customers.name
FROM orders
JOIN customers ON orders.customer_id = customers.id;

NoSQL (MongoDB) Example:

Instead of performing a join, data would already be embedded, allowing for a simple query:

db.orders.find({ "customer.name": "John Doe" });

(Dot notation matches the embedded field; the form { customer: { name: ... } } would only match an exact subdocument.)

3. Indexing and Performance Optimization

Indexing works differently in NoSQL databases. In MongoDB, for example, you can create indexes on fields much like you would in a SQL database, but how the database uses the index internally may vary. Ensure you understand how to optimize queries based on the NoSQL database you’re using.

MongoDB Index Example:

db.users.createIndex({ email: 1 });

4. Real-World Use Cases of SQL to NoSQL Migration

1. eBay (MySQL to MongoDB):

eBay needed a scalable solution to handle its vast and growing amount of product listings and user data. While MySQL could handle transactions, MongoDB allowed eBay to store large, unstructured product data efficiently, thanks to its flexible document-based model.

2. Netflix (Cassandra for Real-Time Data):

Netflix transitioned from Oracle (a relational database) to Apache Cassandra for their global user base. The need for a distributed, fault-tolerant system capable of handling large data volumes led them to choose Cassandra. Its column-based, distributed architecture allowed Netflix to maintain high availability and scalability.

3. Twitter (MySQL and NoSQL Combination):

Twitter uses a combination of MySQL and various NoSQL solutions (e.g., Redis, Cassandra). MySQL handles user data and relationships, while Redis and Cassandra power real-time feeds and message distribution to millions of users.

5. Tips for a Successful Transition

  • Start with a hybrid approach: In many cases, you don’t need to fully abandon SQL. Companies like Twitter and Facebook still use a combination of SQL and NoSQL to meet their needs.

  • Understand your data access patterns: Analyze how your application reads and writes data. If you have frequent reads with relatively few updates, a NoSQL solution might be a good fit.

  • Consider your scaling needs: NoSQL databases shine when you need horizontal scalability. If your SQL database is struggling with massive workloads, transitioning to NoSQL may be the answer.


Wednesday, July 31, 2024

Advanced SQL Techniques: Optimizing Queries for Performance

 

Introduction

Slow queries can be a database administrator's worst nightmare. They can drastically impact application performance, frustrate users, and even bring systems to a halt. This is where query optimization comes in. By understanding how queries are executed and employing advanced techniques, you can significantly improve the speed and efficiency of your database.

Understanding Query Execution

Before diving into optimization, it's essential to grasp the fundamental steps involved in query execution:

  1. Parsing: The database breaks down the SQL statement into components it understands.
  2. Optimization: The query optimizer creates an execution plan, determining the most efficient way to retrieve the data.
  3. Execution: The database follows the execution plan to retrieve and process data.

While the optimizer is a powerful tool, it's not infallible. It relies on statistics and assumptions, which can sometimes lead to suboptimal plans.

Advanced SQL Optimization Techniques

Indexing

Indexes are data structures that accelerate data retrieval. They create a searchable copy of specific columns, allowing the database to quickly locate rows without scanning the entire table.

  • Types of Indexes: B-tree, function-based, bitmap
  • Creating Effective Indexes: Identify frequently queried columns, consider index selectivity, and balance index creation with maintenance overhead.
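
The payoff of an index shows up directly in the execution plan. A self-contained sqlite3 sketch (sqlite stands in for your engine; the table and index names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

def plan(sql):
    """Join the detail column of EXPLAIN QUERY PLAN output."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM users WHERE email = 'user500@example.com'"
before = plan(query)            # full table scan

conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)             # index search

print(before)
print(after)
```

The same query flips from a full scan to an index search once the index exists, without any change to the SQL.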

Query Rewriting

Often, inefficient query structures can be improved through rewriting.

  • Replace subqueries with joins: This can often lead to better performance, especially for correlated subqueries.
  • Eliminate unnecessary calculations: Remove calculations that don't impact the result set.
  • Reduce data volume: Use techniques like filtering and limiting data early in the query.
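
As a concrete instance of subquery-to-join rewriting, the two queries below return the same rows; the join form generally gives the optimizer more freedom. Sketched with sqlite3 and an invented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1);
""")

# Correlated scalar subquery: one lookup per order row
subquery = """
    SELECT o.id,
           (SELECT c.name FROM customers c WHERE c.id = o.customer_id) AS name
    FROM orders o ORDER BY o.id
"""
# Equivalent join form
join = """
    SELECT o.id, c.name
    FROM orders o JOIN customers c ON c.id = o.customer_id
    ORDER BY o.id
"""

rows_sub = conn.execute(subquery).fetchall()
rows_join = conn.execute(join).fetchall()
print(rows_join)   # same result either way
```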

Execution Plans

Understanding execution plans is crucial for identifying performance bottlenecks.

  • Reading Execution Plans: Learn to interpret the plan's structure, operations, and costs.
  • Identifying Bottlenecks: Look for full table scans, sorts, and large amounts of data transferred.
  • Optimization Based on Plans: Create or modify indexes, rewrite queries, or gather statistics.

Window Functions

Window functions provide powerful capabilities for calculations across rows without subqueries. They can often improve query performance and readability.

  • Common Window Functions: RANK, DENSE_RANK, ROW_NUMBER, LEAD, LAG, and aggregate functions with OVER clause.
  • Performance Benefits: Utilize window functions to replace correlated subqueries or complex joins.
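
For instance, ranking rows within groups, once a correlated-subquery job, is a single expression with a window function. A sqlite3 sketch (requires SQLite 3.25+; sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('east', 100), ('east', 300), ('west', 200), ('west', 50);
""")

# Rank each sale within its region, highest amount first
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales ORDER BY region, rnk
""").fetchall()
for r in rows:
    print(r)
```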

Common Table Expressions (CTEs)

CTEs enhance readability and can sometimes improve performance by simplifying complex queries.

  • CTE Structure: Define named result sets that can be referenced multiple times in the main query.
  • Performance Considerations: Use CTEs judiciously, as they can introduce overhead in some cases.
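
A minimal CTE in the same vein: the named result set is defined once and the outer query reads from it. Sketched with sqlite3 on invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total INTEGER);
    INSERT INTO orders VALUES
        (1, 'Alice', 120), (2, 'Bob', 40), (3, 'Alice', 90), (4, 'Bob', 300);
""")

rows = conn.execute("""
    WITH customer_totals AS (
        SELECT customer, SUM(total) AS spent
        FROM orders GROUP BY customer
    )
    SELECT customer, spent FROM customer_totals
    WHERE spent > 100 ORDER BY customer
""").fetchall()
print(rows)   # customers who spent more than 100 overall
```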

Case Studies and Real-World Examples

  • Case Study 1: A slow-running report was optimized by creating a composite index on frequently filtered columns.
  • Case Study 2: A complex query with multiple subqueries was rewritten using joins and CTEs, resulting in a significant performance boost.

Best Practices and Tips

  • Regularly monitor query performance.
  • Gather and analyze statistics.
  • Test changes in a controlled environment.
  • Consider indexing strategies carefully.
  • Continuously learn and improve your skills.

Conclusion

Optimizing SQL queries is an ongoing process that requires a combination of knowledge, tools, and experience. By mastering advanced techniques and following best practices, you can dramatically enhance the performance of your database applications.

Remember: Every database environment is unique. Experimentation and fine-tuning are essential for achieving optimal results.

Improving Database Performance: Indexing, Partitioning, and Monitoring

 A high-performing database is crucial for any application's success. Slow query response times, resource bottlenecks, and system instability can significantly impact user experience and business operations. This blog post will explore three fundamental techniques to enhance database performance: indexing, partitioning, and monitoring.

Indexing

Indexing is a powerful method to accelerate data retrieval. It creates a searchable copy of specific columns, allowing the database to quickly locate rows without scanning the entire table.

Key benefits of indexing:

  • Improved query performance: Indexes significantly reduce query execution time.
  • Enhanced application responsiveness: Faster data retrieval leads to improved user experience.
  • Reduced system load: Optimized indexes can decrease resource utilization.

Best practices for indexing:

  • Identify frequently accessed columns: Create indexes on columns used in WHERE, JOIN, ORDER BY, and GROUP BY clauses.
  • Consider index selectivity: High selectivity improves index efficiency.
  • Balance index creation with maintenance overhead: Excessive indexing can impact performance.
  • Monitor index usage: Regularly review index statistics and remove unused indexes.

Partitioning

Partitioning divides large tables into smaller, more manageable segments based on specific criteria. This technique offers several advantages:

  • Improved query performance: Queries can be directed to specific partitions, reducing data scanned.
  • Enhanced data management: Partitioning simplifies data loading, unloading, and archiving.
  • Improved backup and recovery: Smaller partitions can be backed up and restored more efficiently.

Common partitioning strategies:

  • Range partitioning: Divides data based on a range of values in a column (e.g., date, number).
  • Hash partitioning: Distributes data evenly across partitions based on a hash function.
  • List partitioning: Divides data based on values in a list.
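
Conceptually, range partitioning is a routing rule from a key to a segment. A toy Python sketch of date-range routing (partition names and bounds invented; real engines declare this in DDL rather than application code):

```python
from datetime import date

# Each partition covers a half-open date range [start, end)
PARTITIONS = [
    ("orders_2023", date(2023, 1, 1), date(2024, 1, 1)),
    ("orders_2024", date(2024, 1, 1), date(2025, 1, 1)),
]

def route(order_date):
    """Return the partition a row lands in (range partitioning)."""
    for name, start, end in PARTITIONS:
        if start <= order_date < end:
            return name
    raise ValueError(f"no partition covers {order_date}")

print(route(date(2023, 6, 15)))   # orders_2023
print(route(date(2024, 2, 1)))    # orders_2024
```

Partition pruning is the same rule applied to a query's WHERE clause: only the partitions whose ranges overlap the predicate are scanned.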

Considerations for partitioning:

  • Partitioning key selection: Choose a column that effectively distributes data across partitions.
  • Partition maintenance: Regularly monitor and manage partitions to prevent performance degradation.
  • Partition pruning: Utilize partition pruning to optimize query performance.

Monitoring

Continuous monitoring is essential to identify performance issues and optimize the database. Key performance indicators (KPIs) to track include:

  • Query response time: Measure the time taken to execute queries.
  • CPU utilization: Monitor CPU usage to detect bottlenecks.
  • I/O wait time: Assess disk I/O performance.
  • Memory usage: Track memory consumption to avoid swapping.
  • Database locks: Identify contention issues.
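
The first KPI, query response time, can be sampled straight from application code. A minimal sketch, using sqlite3 as a stand-in target:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])

def timed_query(sql):
    """Run a query and return (rows, elapsed_seconds)."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return rows, time.perf_counter() - start

rows, elapsed = timed_query("SELECT COUNT(*) FROM t")
print(f"{rows[0][0]} rows counted in {elapsed:.6f}s")
```

In production this measurement usually comes from the database's own instrumentation rather than the client, but the KPI is the same.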

Tools for database monitoring:

  • Database-specific monitoring tools: Oracle Enterprise Manager, SQL Server Management Studio, MySQL Performance Schema.
  • Third-party monitoring solutions: Nagios, Zabbix, Datadog.

Proactive monitoring:

  • Set performance thresholds: Define acceptable performance levels.
  • Implement alerting: Notify administrators of performance issues.
  • Analyze trends: Identify patterns and potential problems.

By effectively combining indexing, partitioning, and monitoring, you can significantly improve database performance, optimize resource utilization, and enhance overall system reliability. Remember, database optimization is an ongoing process that requires continuous evaluation and adjustment.

Friday, February 10, 2023

PL/SQL Table Function Examples

 Example 1 -

CREATE OR REPLACE FUNCTION squares (p_limit IN NUMBER)
  RETURN SYS.ODCINUMBERLIST
IS
  l_squares SYS.ODCINUMBERLIST := SYS.ODCINUMBERLIST();
BEGIN
  FOR i IN 1..p_limit LOOP
    l_squares.EXTEND;
    l_squares(l_squares.COUNT) := i * i;
  END LOOP;
  RETURN l_squares;
END;
/


SELECT column_value FROM TABLE(squares(10));

----------------------------------------------

1 4 9 16 25 36 49 64 81 100


Example 2 -

CREATE OR REPLACE FUNCTION get_employees_salaries
  RETURN SYS.ODCIVARCHAR2LIST
AS
  emp_salaries SYS.ODCIVARCHAR2LIST := SYS.ODCIVARCHAR2LIST();
BEGIN
  -- BULK COLLECT is required to fetch multiple rows into a collection
  SELECT ename || ': ' || sal
    BULK COLLECT INTO emp_salaries
    FROM emp;
  RETURN emp_salaries;
END;
/

SELECT * FROM TABLE(get_employees_salaries);
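
A table function streams computed rows into a SQL query much the way a generator streams values in ordinary code; the squares example, mirrored in Python:

```python
def squares(limit):
    """Yield i*i for i in 1..limit, like the PL/SQL table function."""
    for i in range(1, limit + 1):
        yield i * i

print(list(squares(10)))   # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```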



