OCI Networking Series Part 4 – Advanced DRG & Multi-VCN Architectures Explained

Objective: Scaling Networking in Large Deployments


As enterprises adopt Oracle Cloud Infrastructure (OCI) for mission-critical workloads, networking design must scale to handle multi-VCN, multi-region, and hybrid connectivity requirements. At the core of such architectures lies the Dynamic Routing Gateway (DRG) — a powerful, software-defined router that enables seamless communication between Virtual Cloud Networks (VCNs), on-premises networks, and even other cloud providers.

In this blog, we’ll explore advanced DRG use cases and multi-VCN architectures, including DRG attachments, Remote Peering Connections, Hub-and-Spoke design, and transit routing.


What are the DRG and the Enhanced DRG?


The Dynamic Routing Gateway (DRG) in Oracle Cloud Infrastructure (OCI) is a regional virtual router that serves as the backbone for hybrid and multi-VCN connectivity. It provides a central point of control, enabling communication between Virtual Cloud Networks (VCNs), on-premises environments through IPSec VPN or FastConnect, and even cross-region VCNs using remote peering.

With the introduction of the Enhanced DRG (DRGv2), OCI has greatly simplified and strengthened network design. The enhanced DRG supports multiple types of attachments (VCNs, FastConnect, VPN, Local and Remote Peering), allows per-attachment route tables for fine-grained traffic control, and delivers greater scalability to support complex topologies such as hub-and-spoke architectures. This makes it easier for enterprises to design secure, flexible, and highly available hybrid cloud networks.

DRG Attachments


A DRG (Dynamic Routing Gateway) acts as the central connectivity hub in OCI. It supports multiple attachments that define the type of connection.

VCN Attachments → Connect a VCN to the DRG for routing traffic in/out of that VCN.
Remote Peering Connection (RPC) → Connects VCNs across different OCI regions securely over the OCI backbone.
FastConnect & IPsec VPN Attachments → Hybrid connectivity options linking on-premises datacenters or third-party clouds to OCI.

Each attachment has its own set of rules, or import route distribution, attached to it, which controls the flow of traffic. By default, there are two DRG route tables: one for VCN attachments and one for IPsec VPN and FastConnect attachments.


The diagram above shows all of the DRG attachment types. Each attachment can use the same route table or a different one. Keeping separate route tables makes it easier to manage, isolate, and define different routes for different attachments. If the setup is small and all traffic is allowed between attachments, the default route tables are sufficient, but as a best practice, separate route tables pay off in the long run.
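To build intuition for how a per-attachment route table steers traffic, here is a minimal Python sketch of longest-prefix-match lookup. This is a conceptual model, not the OCI API, and all attachment names are illustrative:

```python
import ipaddress

def next_hop(route_table, dest_ip):
    """Return the attachment for the most specific rule matching dest_ip."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, attachment in route_table.items():
        net = ipaddress.ip_network(cidr)
        # Longest prefix (largest prefixlen) wins, as in a real router.
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, attachment)
    return best[1] if best else None

# Hypothetical route table assigned to the VPN/FastConnect attachments:
hybrid_rt = {
    "10.1.0.0/16": "vcn-a-attachment",
    "10.1.50.0/24": "firewall-vcn-attachment",  # more specific, wins for 10.1.50.x
}

print(next_hop(hybrid_rt, "10.1.50.7"))  # firewall-vcn-attachment
print(next_hop(hybrid_rt, "10.1.2.3"))   # vcn-a-attachment
```

Because the most specific rule wins, you can send a narrow range (here 10.1.50.0/24) through an inspection attachment while the rest of the VCN's traffic takes the direct path.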

Remote Peering Connections (RPC)


When enterprises expand across regions, Remote Peering Connections (RPC) provide secure, low-latency connectivity between two VCNs in different OCI regions.
  • Uses OCI’s private backbone network (not public internet).
  • Supports east-west traffic flow across regions.
  • Useful for DR setups (e.g., replicating databases from Mumbai region to Hyderabad region).
Each region has its own DRG, and the two DRGs are connected through an RPC on each side. A VM in a VCN in the primary region can reach a VM in a VCN in the other region; the path is VM -> DRG -> RPC-1 -> RPC-2 -> DRG -> VM.


The diagram above shows a Remote Peering Connection enabled between the primary and secondary regions, along with all the components required to set it up. Route tables control the routes to the destination, and security lists restrict the ports.

πŸ•Έ️ Hub-and-Spoke Architecture Design


Enterprises often manage dozens of VCNs across departments, projects, or business units. Connecting all of these VCNs in a full mesh would be complex and unscalable. Instead, OCI recommends a Hub-and-Spoke model: a single DRG can connect up to 300 VCNs, which simplifies the overall architecture, the security list and route table configuration, and security policy management.

Hub VCN → Contains the DRG, shared services (DNS, logging, monitoring).
Spoke VCNs → Application-specific or environment-specific VCNs.

Traffic between spokes flows via the hub DRG. 

Advantages:
✔ Centralized control
✔ Simplified routing
✔ Cost-efficient scaling

The diagram below shows a Hub VCN containing a network firewall and a load balancer. All traffic from on-premises first enters the Hub VCN, where the firewall inspects it in the Hub VCN subnet, and is then forwarded to the respective spoke VCN. Return traffic follows the same route.
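The forwarding behavior described above can be sketched as a toy model. This is purely illustrative (not an OCI API), with made-up names: every flow entering from on-premises transits the hub firewall before reaching its spoke:

```python
import ipaddress

HUB = "hub-vcn-firewall"
SPOKES = {"10.1.0.0/16": "spoke-1", "10.2.0.0/16": "spoke-2"}

def path_from_onprem(dest_ip):
    """Return the ordered hops a packet from on-premises takes."""
    ip = ipaddress.ip_address(dest_ip)
    for cidr, spoke in SPOKES.items():
        if ip in ipaddress.ip_network(cidr):
            # Every flow is hairpinned through the hub for inspection.
            return ["on-prem", "DRG", HUB, "DRG", spoke]
    return ["on-prem", "DRG", HUB, "dropped"]  # no route past the firewall

print(path_from_onprem("10.2.9.9"))
# ['on-prem', 'DRG', 'hub-vcn-firewall', 'DRG', 'spoke-2']
```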


πŸ”€ Transit Routing with DRG


Transit routing is a network architecture pattern that uses a central "hub" Virtual Cloud Network (VCN) to connect an on-premises network to multiple other VCNs, or to services, enabling traffic to "transit" through the hub to its final destination.
  • Transit Routing enables one network connection (like FastConnect or VPN) to be shared across multiple VCNs via the hub DRG.
  • Reduces redundant FastConnect/VPN setups.
  • Ensures centralized hybrid connectivity.
  • Simplified administration.
  • Scalability (easier to add more VCNs).
As shown in the diagram above, all traffic from on-premises passes through the Hub VCN, which acts as the central entry point into the OCI tenancy and forwards the traffic on to its destination.
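The "one connection, many VCNs" idea behind transit routing can be sketched in a few lines. This is a simplified model, assuming a single hybrid attachment whose learned routes are distributed (via an import route distribution) to every VCN attachment; the names are illustrative:

```python
# Routes learned from the single FastConnect/VPN attachment...
onprem_routes = {"192.168.0.0/16": "fastconnect-attachment"}

# ...are copied into the route table of every VCN attachment, so each
# VCN can reach on-premises through the hub DRG without its own circuit.
vcn_attachments = ["hub-vcn", "spoke-1-vcn", "spoke-2-vcn"]
vcn_route_tables = {a: dict(onprem_routes) for a in vcn_attachments}

print(vcn_route_tables["spoke-2-vcn"])
# {'192.168.0.0/16': 'fastconnect-attachment'}
```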


✅ Best Practices for Advanced DRG & Multi-VCN Architectures


When designing OCI DRG and multi-VCN architectures, a few guiding principles can make your network both resilient and scalable. Start with a hub-and-spoke model, positioning the DRG at the center to simplify connectivity across environments. Use separate VCNs to clearly segment workloads—applications, databases, or testing—and keep shared services such as DNS, firewalls, and monitoring in the hub VCN for easier management. Take advantage of DRG route tables to fine-tune how traffic moves between networks, and enable BGP peering to gain dynamic routing, faster failover, and better bandwidth utilization. 

Strengthen your security posture with a Zero Trust mindset, applying Security Lists and Network Security Groups (NSGs) for precise traffic control. Finally, don’t overlook monitoring and logging, which provide the visibility needed to maintain both performance and protection in complex deployments.

  • Use Hub-and-Spoke with Transit Routing for multi-VCN, multi-region scaling.
  • For intra-region VCN connectivity, use DRG attachments (enhanced DRG); for cross-region, use RPC.
  • Deploy redundant FastConnect or VPN tunnels for HA.
  • Keep routing policies simple and centralized at the DRG.
  • Apply security lists or NSGs carefully to control cross-VCN traffic.


Hands-On Practices to follow:- 


The best way to understand the concepts above is to build these architectures yourself in a Free Tier tenancy.

1. Create a Hub-and-Spoke Network with a DRG :- 

Provision one Hub VCN (10.0.0.0/16) and two Spoke VCNs (10.1.0.0/16 and 10.2.0.0/16).
Attach the Hub VCN and both Spoke VCNs to a DRG.
Configure DRG route tables so that traffic from Spoke-1 can reach Spoke-2 via the Hub.
Test: Launch a compute instance in each Spoke VCN and verify connectivity (ping/ssh).
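Before building the lab, you can sanity-check the CIDR plan with Python's standard `ipaddress` module; overlapping VCN CIDRs would make the DRG route tables ambiguous:

```python
import ipaddress

# The lab's VCN CIDRs from the steps above.
cidrs = {
    "hub":     ipaddress.ip_network("10.0.0.0/16"),
    "spoke-1": ipaddress.ip_network("10.1.0.0/16"),
    "spoke-2": ipaddress.ip_network("10.2.0.0/16"),
}

# Verify every pair of VCN CIDRs is disjoint.
names = list(cidrs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not cidrs[a].overlaps(cidrs[b]), f"{a} overlaps {b}"

print("CIDR plan is overlap-free")
```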

2. Remote Peering Connection (RPC) – Cross-Region Connectivity

Create two VCNs in different regions (e.g., Ashburn and Phoenix), each attached to its own DRG.
Create a Remote Peering Connection (RPC) on each DRG.
Connect the two RPCs to establish the peering.
Test: Deploy compute instances in both regions and verify private IP-to-private IP connectivity.



πŸ“Œ Conclusion

Advanced DRG and Multi-VCN architectures provide the backbone for scalable, resilient, and hybrid OCI deployments. With features like DRG attachments, RPC, Hub-and-Spoke, and Transit Routing, enterprises can securely connect workloads across regions, projects, and even other clouds — all while keeping routing centralized and efficient.


 

OCI Networking Series – Part 3: Hybrid Networking with IPSec & FastConnect

 Objective:

In this blog, we take a deep dive into hybrid networking in Oracle Cloud Infrastructure (OCI), focusing on how enterprises securely and reliably connect their on-premises data centers to OCI. We’ll explore IPSec VPN, FastConnect, Dynamic Routing Gateway (DRG), customer edge router considerations, and real-world architectures — including how OCI integrates with other cloud providers like Azure, AWS, and Google Cloud Platform (GCP).

Why Hybrid Networking is Critical in OCI

Hybrid networking enables enterprises to extend their existing on-premises infrastructure into the cloud while ensuring business continuity, data security, and scalability. Enterprises often face scenarios such as:

✔ Gradual migration of workloads
✔ Disaster recovery and backup strategies
✔ Secure communication across cloud and on-prem environments
✔ Regulatory compliance and data sovereignty needs
✔ Multi-cloud deployments for performance optimization

OCI’s networking services empower enterprises to design secure, high-performance, and cost-effective hybrid architectures while maintaining control over traffic flow, encryption, and routing.


The Role of DRG in Hybrid Networking

The Dynamic Routing Gateway (DRG) is the central virtual router that connects your OCI VCN with on-premises networks through IPSec VPN or FastConnect.

Key Functions

✔ Route propagation between OCI and customer networks
✔ Central management of hybrid traffic flows
✔ Integration with route tables and security controls
✔ Support for multiple attachments — VPN, FastConnect, and VCN peering


Customer Edge Router – Essential Configuration Considerations

The on-premises router must meet certain standards to establish and maintain reliable hybrid connections.

Must-Have Features

✔ Support for IPSec and IKEv2 protocols
✔ Dual tunnel configuration for high availability
✔ Sufficient encryption processing capacity
✔ BGP (Border Gateway Protocol) support for dynamic route exchange
✔ Compatibility with provider-specific interfaces for FastConnect
✔ Security configurations to meet enterprise requirements

Example Use Case – Accessing OCI Databases from On-Premises

A common hybrid architecture scenario:

  • An on-premises ERP system requires secure access to OCI’s Autonomous Database
  • Dual IPSec VPN tunnels ensure redundancy during business hours
  • A FastConnect circuit handles scheduled data replication and high-volume transfers
  • DRG manages route propagation between on-prem and OCI
  • Security rules restrict traffic to necessary ports and addresses
  • Monitoring ensures availability, performance, and fault detection

This setup guarantees secure, high-performance communication while minimizing downtime and complexity.


IPSec VPN – Secure Internet-Based Connection Without Additional Charges

IPSec VPN provides encrypted communication over the public internet between your on-premises network and OCI’s Virtual Cloud Network (VCN) through the Dynamic Routing Gateway (DRG).

Key Features

✔ Uses industry-standard IPSec protocols and IKEv2 for secure tunnel establishment
✔ Supports dual tunnels for high availability (HA)
✔ No additional VPN charges — only bandwidth usage is billed
✔ Best suited for small offices, backup connections, or moderate workloads
✔ Provides encrypted communication without complex infrastructure changes

Limitations

✔ Internet variability can affect latency and throughput
✔ Not recommended for large-scale data transfers
✔ Encryption overhead may impact performance in compute-intensive environments

Setup Highlights

  1. Attach a DRG to your OCI VCN
  2. Create an IPSec connection in the OCI Console
  3. Configure customer edge routers with matching encryption settings
  4. Establish two tunnels for redundancy
  5. Monitor and troubleshoot using OCI’s tools


There are two tunnels, Tunnel 1 and Tunnel 2, for redundancy; you can configure the parameters for both tunnels accordingly. First create the CPE object, which holds the public IP of the on-premises device, and then attach the CPE to the IPSec connection.

As shown in the image above, there are three routing types: BGP dynamic routing, static routing, and policy-based routing.

BGP Dynamic Routing: Uses Border Gateway Protocol (BGP) to automatically exchange and update routes between OCI (via DRG) and customer edge routers, enabling scalable and resilient connectivity.

Static Routing: Administrator manually defines fixed routes between on-premises and OCI; simple but less flexible as changes require manual updates.

Policy-Based Routing (PBR): Routes traffic based on policies such as source, destination, or application type, allowing granular control beyond just destination IPs.

FastConnect – High-Speed Private Connectivity for Mission-Critical Workloads

FastConnect provides a private, high-bandwidth, and low-latency connection between your on-premises network and OCI, bypassing the public internet. It is ideal for performance-sensitive workloads requiring consistent bandwidth and secure communication.

Peering Types

  • Private Peering: Access OCI services like compute, block storage, or databases via private IP addresses.

  • Public Peering: Access public OCI services like Object Storage or APIs securely over Oracle’s network.

Key Benefits

✔ Dedicated link with guaranteed bandwidth
✔ Predictable, low-latency connections
✔ Supports multiple circuits and failover strategies
✔ Enables large data transfers, replication, and analytics pipelines


Configuring FastConnect in OCI begins with creating a Dynamic Routing Gateway (DRG) and attaching it to your target VCN. Next, you set up a FastConnect connection, choosing either a FastConnect Partner (via an Oracle-approved provider) or FastConnect Direct (physical cross-connect at an Oracle colocation). 

For partner connections, you configure a virtual circuit, which can be single (basic) or redundant (for high availability). With FastConnect Direct, you establish physical connectivity and map it to a virtual circuit in OCI. After provisioning, configure BGP peering between your customer edge router and the OCI router to dynamically exchange routes. 

Redundancy can be added at the device, location, or configuration level to ensure resilience. Finally, validate the setup with connectivity tests and monitor the circuit using OCI’s monitoring tools.


FastConnect Partner: Connect through an Oracle-approved network provider.
Single Virtual Circuit: One dedicated connection through the partner (no redundancy).
Redundant Virtual Circuits: Two independent partner circuits for high availability.

FastConnect Direct: Direct physical cross-connect to Oracle at a colocation facility. Provides maximum control, lower latency, and is ideal for enterprises with existing colocation presence.

Redundancy Models:

Location Redundancy: Two FastConnect links from different physical sites.
Single FastConnect: One connection only (entry-level, no failover).
Device Redundancy: Dual edge devices at the same location for failover.
Configuration Redundancy: Dual circuits with BGP routing policies for seamless failover.




The table below compares IPSec VPN and FastConnect.


Hybrid Connectivity with Other Cloud Providers

For enterprises leveraging multi-cloud strategies, OCI’s hybrid networking solutions integrate seamlessly with equivalent offerings from other major cloud providers. Oracle Database services are now available alongside all major cloud providers (Azure, Google Cloud, and AWS), and there are many scenarios where the database lives in OCI while the application stack runs in another cloud.

πŸ”— OCI + Azure

OCI FastConnect ↔ Azure ExpressRoute

Enables private, high-bandwidth links between OCI and Azure, allowing workloads such as analytics, disaster recovery, and secure API access across clouds.


πŸ”— OCI + AWS

OCI FastConnect ↔ AWS Direct Connect

Provides private links for data replication, backup, and distributed applications between OCI and AWS regions.


πŸ”— OCI + GCP

OCI FastConnect ↔ Google Cloud Interconnect

Offers scalable, secure connectivity between OCI and Google Cloud services, supporting data pipelines, machine learning workflows, and cross-cloud architecture.


 Multi-Cloud Use Cases

✔ Disaster recovery across clouds
✔ Secure data pipelines for analytics
✔ Low-latency connections between cloud-native services
✔ Compliance-driven architectures
✔ Cost-effective multi-cloud resource optimization


Summary

In this part of the OCI Networking Series, we explored how hybrid networking enables secure, high-performance communication between on-premises environments and Oracle Cloud Infrastructure. We covered:

✔ IPSec VPN’s role in secure, internet-based connections without additional VPN charges
✔ FastConnect’s high-bandwidth, low-latency private connectivity for mission-critical workloads
✔ DRG’s routing capabilities in managing hybrid traffic
✔ Customer edge router requirements for encryption, redundancy, and dynamic routing
✔ Practical scenarios like accessing OCI databases from on-premises
✔ A comparison of IPSec VPN vs FastConnect
✔ Multi-cloud hybrid architectures using Azure ExpressRoute, AWS Direct Connect, and Google Cloud Interconnect

By implementing these best practices, organizations can confidently extend their networks into OCI, optimize performance, and ensure business continuity.


🌐 OCI Networking Series: Part 2 – Designing and Managing VCNs

 

Objective: Deep dive into VCN architecture 


Here we’ll cover practical guidance for creating and managing VCNs, choosing public vs private subnets, working with route tables and security lists/NSGs, CIDR sizing and planning, plus subnetting best practices.

Quick primer — what a VCN is (Virtual Cloud Network)

A Virtual Cloud Network (VCN) is your private network inside OCI — think of it as a virtual datacenter where you define IP ranges, subnets, route rules and gateways that control traffic for your cloud workloads. You can create VCNs from the Console, CLI or via IaC (Terraform/Pulumi). 


Creating & managing VCNs :- 


Console (VCN Wizard) :

The easiest way for hands-on learning is the VCN Wizard (Console) — it can create a VCN with public and private subnets, route tables, internet/NAT/service gateways, and default security lists in a few clicks.

CLI / API / IaC :

You can create VCNs using oci network vcn create, the REST API, or via Terraform / Pulumi providers (all documented). Use the CLI for automation or scripts; use Terraform for reproducible deployments.

You can also use the quick wizard for automated VCN deployment, but it is not recommended for production-grade setups; it is fine for learning. The two methods described above are the ones most commonly used for deploying networks in an OCI tenancy.

Important facts to keep in mind
  • A VCN can include multiple non-overlapping IPv4 CIDR blocks. 
  • You can update a VCN to add CIDR blocks (subject to limits and constraints), but CIDR changes must follow the rules: no overlap, and a changed range must not include addresses used by existing subnets.
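The non-overlap rule can be checked up front with the standard `ipaddress` module. The existing CIDR blocks below are illustrative, not from any real tenancy:

```python
import ipaddress

# CIDR blocks already assigned to the VCN (example values).
existing = [ipaddress.ip_network(c) for c in ("10.0.0.0/16", "172.16.0.0/24")]

def can_add(new_cidr):
    """True if new_cidr is disjoint from every existing block."""
    new = ipaddress.ip_network(new_cidr)
    return not any(new.overlaps(e) for e in existing)

print(can_add("10.1.0.0/16"))   # True  - disjoint from both blocks
print(can_add("10.0.5.0/24"))   # False - falls inside 10.0.0.0/16
```

The same check also applies against on-premises ranges and peered VCNs before you commit to a plan.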


Public vs Private Subnets — when & why


Public Subnet :- 

Resources in a public subnet typically have a public IP (or assigned public IP on the VNIC) and route outbound/inbound traffic via an Internet Gateway (IGW).  Public subnets are used for load balancers, bastion hosts, public-facing web servers, etc. Only one IGW is needed per VCN (but access still depends on route rules + security rules).

Private Subnet :-

Resources in private subnets do not have public IPs. If they need outbound internet access (patching, updates), use a NAT Gateway to provide outbound-only access while blocking inbound connections from the internet. This is the recommended pattern for backend servers/databases. 

Design rules of thumb

  • Place edge-facing services (web tier, bastion) in public subnets and backend tiers (app, DB, caches) in private subnets. 
  • Control egress for private subnets via NAT or through a proxy/firewall appliance. This pattern simplifies security posture and auditing.

Route Tables: Default vs Custom (how routing works)


Default route table :- 

Each VCN automatically has a default route table. Subnets inherit the VCN’s default route table unless you explicitly assign a custom one. 

Custom route tables :- 

Use custom route tables when you need different outbound targets for different subnets (for example: public subnet → IGW; private subnet → NAT or firewall; hybrid subnet → DRG).  Create a route rule with destination CIDR and target (IGW, NAT, DRG, Service Gateway, local peering, etc.).

Best practices

Give each logical tier (web, app, db) its own route table where it makes sense — this improves clarity and reduces risk of accidental route changes affecting multiple tiers. 


In the diagram above, there are three subnets: one public and two private. Each subnet has its own route table and security list. The public subnet reaches the internet via an Internet Gateway, whereas private subnet B reaches the internet via a NAT Gateway.

The two subnets serve different purposes and use different routes, so it is better to give each its own route table. The third (private) subnet reaches Object Storage, perhaps for backups, via a Service Gateway.

Security: Security Lists vs Network Security Groups (NSGs)

Security Lists :-

Security lists are applied at subnet level — every VNIC in the subnet is subject to the security-list rules. The default security list comes with initial stateful rules to enable things like SSH by default; you should tighten these for production.

Network Security Groups (NSGs) :-

NSGs are applied to VNICs (instance-level micro-segmentation). They act like a virtual firewall for a group of resources that share the same security posture. Use NSGs when you want finer-grained control without segregating into separate subnets.

Stateful vs Stateless

Individual security rules can be stateful (default) or stateless. Stateful rules use connection tracking, so responses are automatically allowed; stateless rules require you to allow both directions explicitly.
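The difference can be illustrated with a toy connection-tracking model. This is purely conceptual, not how OCI implements it:

```python
class StatefulFirewall:
    """Toy model: egress rules plus connection tracking for replies."""

    def __init__(self, egress_allowed):
        self.egress_allowed = egress_allowed  # set of (src, dst) pairs
        self.conntrack = set()                # reverse flows we expect replies on

    def send(self, src, dst):
        ok = (src, dst) in self.egress_allowed
        if ok:
            self.conntrack.add((dst, src))    # remember the reverse flow
        return ok

    def receive(self, src, dst):
        # Only replies to tracked connections are admitted; a stateless
        # rule set would instead need an explicit ingress rule here.
        return (src, dst) in self.conntrack

fw = StatefulFirewall({("10.0.1.5", "10.0.2.9")})
print(fw.send("10.0.1.5", "10.0.2.9"))     # True: egress rule matches
print(fw.receive("10.0.2.9", "10.0.1.5"))  # True: reply auto-allowed
print(fw.receive("10.0.3.1", "10.0.1.5"))  # False: no rule, no state
```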


When to use which

  • Use security lists for broad, subnet-wide rules and NSGs for dynamic, instance-level or micro-segmentation use cases (e.g., allow only specific app servers to reach DB). 
  • You can use both — they are additive. 

    

Each security rule can be stateful or stateless, as highlighted in the diagram above, and each option has its own use cases.

CIDR block sizing & planning strategies (do this before you launch)

A VCN IPv4 CIDR block must be in the range /16 through /30, and a VCN can contain multiple non-overlapping IPv4 CIDR blocks. Regardless of the number of CIDR blocks, Oracle documents an upper bound on the number of private IPs (for many tenancy configurations the practical limit is ~64K addresses per VCN). Plan accordingly.

CIDR blocks must not overlap with on-premises network CIDRs or with peered VCNs. When you add or remove CIDR blocks, follow the documented CLI/API constraints (ordering, non-overlap, work request state).


Practical planning tips

  • Start with a /16 (10.0.0.0/16) if you anticipate many subnets/hosts; break it into /24s for tiers (10.0.1.0/24 for web, 10.0.2.0/24 for app, 10.0.3.0/24 for db). 
  • If you expect lots of VCN peering or large-scale Kubernetes/containers, plan larger or multiple VCN CIDRs.
  • Avoid RFC1918 overlap with on-prem: Coordinate with network team and reserve ranges for future peering/FastConnect/IPSec. 
  • Leave headroom: Don’t allocate every available /24 right away — leave spare subnets for scaling and for service-specific needs (monitoring, jump hosts, analytics). 
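The /16-into-/24 plan from the first tip can be generated mechanically with the `ipaddress` module; the tier names match the example addressing above:

```python
import ipaddress
from itertools import islice

vcn = ipaddress.ip_network("10.0.0.0/16")

# Skip 10.0.0.0/24 (kept spare here) and take the next three /24s,
# matching the example plan: web, app, db.
tiers = dict(zip(["web", "app", "db"], islice(vcn.subnets(new_prefix=24), 1, 4)))

for name, subnet in tiers.items():
    print(f"{name}: {subnet}")
# web: 10.0.1.0/24
# app: 10.0.2.0/24
# db: 10.0.3.0/24
```

Generating subnets this way (instead of hand-picking ranges) makes it hard to accidentally create overlapping tiers.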


The table above shows how many addresses each CIDR prefix provides. Not all of them are usable: the first two and the last IP address of each subnet are reserved for internal networking.
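Given that three addresses are reserved per subnet, usable capacity per prefix is easy to compute:

```python
import ipaddress

def usable_hosts(cidr):
    """Usable addresses in a subnet: total minus the three reserved."""
    return ipaddress.ip_network(cidr).num_addresses - 3

for prefix in (24, 25, 30):
    print(f"/{prefix}: {usable_hosts(f'10.0.0.0/{prefix}')} usable")
# /24: 253 usable
# /25: 125 usable
# /30: 1 usable
```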

Subnetting best practices specific to OCI

  • Use regional subnets (recommended by Oracle) — they are more flexible than AD-specific subnets and simplify distributing compute across ADs for HA. 
  • Name and tag subnets and route tables consistently: <env>-<tier>-<region>-<purpose> (e.g., prod-app-mum-private). Good naming helps automation and audits. 
  • Separate route tables for private vs public subnets — easier to reason about and reduces blast radius when updating routes. 
  • Prefer NSGs for micro-segmentation if your environment is dynamic (auto-scaling, ephemeral VMs) — NSGs move with the VNIC. Use security lists for stable, static network tiers. 

Example quick checklist before you create your production VCN

  • Confirm your CIDR plan and avoid overlap with on-prem or other VCNs. 
  • Decide regional vs AD-specific subnet strategy (Oracle recommends regional subnets for flexibility). 
  • Prepare security policy: NSGs vs Security Lists and stateful/stateless rules. 
  • Plan route tables per tier (public vs private vs hybrid). 
  • Create tagging + naming conventions so automation & audits are simple.

What's Next :- 

Hands-on: Use the OCI Free Tier to spin up a VCN via the Console wizard, create one public and one private subnet, attach an IGW and NAT, launch a compute instance in each, and test connectivity.

See you in the next one!!!

How to Securely Access OCI Object Storage Using Private Endpoints

 ✍ Introduction

When using Oracle Cloud Infrastructure (OCI), securing your data and controlling how it’s accessed is essential. One way to achieve this is by using OCI Object Storage private endpoints, which ensure that your data stays within OCI’s private network without using the public internet. 

This blog explains what private endpoints are, their benefits, and how to set them up using the OCI Console. We’ll also explain how access was handled before and how private endpoints offer improved security and control.

✅ How Access Was Handled Before Using Private Endpoints

Before private endpoints were available, OCI users could access Object Storage in one of two ways:

Through a Service Gateway


A Service Gateway allows resources inside your VCN to access OCI services like Object Storage without going through the public internet. Even though the traffic doesn’t leave OCI’s cloud, it’s still routed through OCI’s shared infrastructure.

Public Buckets via the Internet


If the bucket was public or the network didn’t have a service gateway configured, applications could access Object Storage over the internet using public endpoints. This method exposes your storage to broader access risks and internet traffic.

✅ What’s Different with OCI Object Storage Private Endpoints

Private endpoints build on the idea of private access but go further by giving you full control over where traffic flows and who can access it:

  • Traffic stays within your VCN’s subnet, not OCI’s shared service infrastructure.

  • You can create custom endpoints with your own DNS prefix and namespace for easy access.

  • You decide which buckets, namespaces, and compartments are accessible, making it more secure than both service gateways and public endpoints.

  • Private endpoints offer dedicated bandwidth up to 25 Gbps, ensuring faster data transfers.

This makes private endpoints the preferred choice for organizations that want secure object storage access, cloud data privacy, and performance optimization in OCI.


In the diagram above, anything that needs to access Object Storage does so via the VNIC in the private subnet. The VNIC receives an IP address from the subnet 10.3.0.0/24.

✅ Limits You Should Know About Private Endpoints

OCI imposes some limits to ensure efficient management and scalability:

  • Up to 10 private endpoints per tenancy.

  • Up to 10 access targets per private endpoint.

  • Maximum bandwidth of 25 Gbps per endpoint.

These limits help maintain performance while giving you flexibility to structure your network access.

✅ How Private Endpoints Work

When you create a private endpoint, OCI:

  1. Creates a virtual network interface (VNIC) inside the chosen subnet.

  2. Sets up a custom endpoint URL using the DNS prefix and namespace you specify.

  3. Resolves the endpoint to the private IP if your DNS resolver is within the VCN or to a public IP if resolved from outside.

This ensures that your application’s access to Object Storage stays secure and under your control.

✅ How to Create a Private Endpoint (Step-by-Step)

πŸ”Ή Step 1 – Create the Private Endpoint

Enter a name, choose a unique DNS prefix, and select the correct VCN and subnet.

πŸ”Ή Step 2 – Add Access Targets

Specify the namespace, compartment, and bucket. Use wildcards only when necessary.

πŸ”Ή Final Setup

OCI will create a VNIC and a custom endpoint for your Object Storage access.


 Testing the Setup

Launch a compute instance in the private subnet and test uploading/downloading files to Object Storage via the private endpoint.


✅ Best Practices

  • Use specific access targets instead of wildcards where possible.
  • Limit the number of endpoints and targets according to business needs.
  • Regularly monitor access and permissions.
  • Use OCI’s private DNS resolver for consistent private routing.
  • Follow cloud security best practices for storage networking.

✅ Conclusion

With OCI Object Storage private endpoints, you get the highest level of security and control over how data is accessed and transferred. Compared to service gateways and public buckets, private endpoints offer better isolation, performance, and compliance support. This solution aligns with modern cloud security strategies and helps organizations keep their data safe while optimizing network efficiency.

OCI Networking Series: Part 1 – Basics of VCN, Subnets & Gateways



🎯 Objective: Laying the Foundation

Networking is the backbone of every cloud deployment. In Oracle Cloud Infrastructure (OCI), it determines how your applications communicate securely, efficiently, and reliably across regions, data centers, and the internet.

This blog sets the foundation for our series by introducing the core networking concepts, building blocks, and design principles in OCI. Whether you are preparing for the OCI Networking Professional certification or designing enterprise-grade architectures, understanding these fundamentals is key.

🌍 What are Regions and Availability Domains?

In OCI, resources are organized geographically and logically to deliver high availability and fault tolerance:

Region → A localized geographic area such as Mumbai, Ashburn, or Frankfurt. Each region contains one or more Availability Domains (ADs).

Availability Domain (AD) → An isolated data center within a region with independent power and cooling. Multiple ADs ensure resiliency against data center failures.

Fault Domain (FD) → A logical grouping within an AD, similar to racks, to spread workloads for rack-level protection.



πŸš€ Why Networking is the Backbone of OCI Deployments


Every workload in OCI — whether a database, containerized app, or analytics pipeline — relies on networking for:

Connectivity → Enabling secure communication within and outside OCI.
Scalability → Handling increasing workloads across subnets, regions, and clouds.
Security → Controlling inbound and outbound traffic with precise policies.
High Availability → Ensuring redundant paths and fault domain isolation.



🧩 Key OCI Networking Building Blocks


1. Virtual Cloud Network (VCN) :- 
Think of a VCN as your private data center in the cloud. Fully customizable with your own CIDR blocks. It can span all Availability Domains in a region.

2. Subnets :-
Logical subdivisions of a VCN. Public subnets → Resources with public IPs accessible via Internet Gateway. Private subnets → Resources with private IPs only, usually backend systems. Subnets are regional by default, but can be AD-specific depending on requirements and design.

3. Route Tables :- 
Define how traffic leaves a subnet.
Common targets:  Internet Gateway (IGW), NAT Gateway, Service Gateway, DRG (Dynamic Routing Gateway)
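As a minimal sketch of how a route table is defined (the OCIDs below are placeholders, not real resources — substitute values from your own tenancy), the OCI CLI can create a route table whose default route points at an Internet Gateway:

```shell
# Hypothetical OCIDs for illustration -- substitute values from your tenancy.
COMPARTMENT_OCID="ocid1.compartment.oc1..example"
VCN_OCID="ocid1.vcn.oc1..example"
IGW_OCID="ocid1.internetgateway.oc1..example"

# Default route (0.0.0.0/0) targeting the Internet Gateway.
ROUTE_RULES='[{"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK", "networkEntityId": "'"$IGW_OCID"'"}]'

# Guarded so the sketch is safe to run without the OCI CLI installed.
if command -v oci >/dev/null 2>&1; then
  oci network route-table create \
    --compartment-id "$COMPARTMENT_OCID" \
    --vcn-id "$VCN_OCID" \
    --display-name "public-rt" \
    --route-rules "$ROUTE_RULES"
fi
```

A subnet associated with this route table becomes a public subnet; swapping the `networkEntityId` for a NAT Gateway OCID turns the same pattern into a private subnet's egress route.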

4. Security Lists & Network Security Groups (NSGs) :- 
Security Lists: Operate at subnet level, like traditional firewalls.
NSGs: Operate at VNIC/instance level, offering micro-segmentation.
Both define stateless/stateful ingress & egress rules.

5. Dynamic Routing Gateway (DRG) :-
The bridge between your VCN and external networks. Supports IPSec VPN, FastConnect, and VCN-to-VCN connectivity. Enables hybrid and multi-VCN architectures.

6. Internet Gateway (IGW) :-
Provides bi-directional connectivity between a VCN and the internet. Required for public-facing workloads such as web servers.

7. NAT Gateway :- 
Provides outbound internet access for private subnet resources. No inbound connections allowed, ensuring stronger security. Ideal for patching/updates of backend servers.

8. Service Gateway :-
Enables private access to the Oracle Services Network (OSN), such as Object Storage. Traffic stays on Oracle’s private backbone and never traverses the public internet.


The diagram below shows the gateways in OCI.




πŸ—Ί️ Regional vs AD-Specific Resources


Understanding whether a resource is regional or AD-specific is crucial when planning networking and workloads.

πŸ”Ή Regional Resources (span the whole region)

VCN
Subnets
DRG, LPG
IGW, NAT Gateway, Service Gateway
Route Tables, Security Lists, NSGs
Load Balancers (regional by default)


πŸ”Ή AD-Specific Resources (bound to a single Availability Domain)

Compute Instances (VMs, Bare Metal, GPU)
Block Volumes (though they can be backed up/replicated across ADs)
File Storage Systems (FSS)
Exadata & other dedicated infrastructure services


πŸ” Shared Responsibility Model for OCI Networking


Security and networking in OCI follow a shared responsibility model:

Oracle Responsibility: Securing the physical network, backbone, and global edge infrastructure.

Customer Responsibility: Designing VCNs, configuring gateways, defining firewall rules (NSGs/Security Lists), and managing routing.


🏁 Conclusion – Get Hands-On with OCI Networking

Understanding the theory is only the first step. The real learning begins when you start building and experimenting in your own tenancy. The good news is that most of the networking services in OCI can be explored using the Always Free Tier.

Here’s what you can try:

  • Create your first VCN manually (not via the wizard) with both public and private subnets.

  • Attach an Internet Gateway and launch a small Compute instance in a public subnet → test access via SSH.

  • Use a NAT Gateway for a private subnet VM → verify outbound internet access without exposing a public IP.

  • Connect to Object Storage privately using a Service Gateway.

  • Experiment with NSGs vs Security Lists → try controlling access to your compute instance with different firewall rules.

  • Explore the OCI Console VCN Wizard → it auto-provisions a VCN, subnets, route tables, and gateways in minutes.
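If you prefer the command line over the Console, the first two exercises can be sketched with the OCI CLI. The compartment OCID and display names below are illustrative placeholders, and the subnet step is left as a comment because it needs the VCN OCID returned by the first command:

```shell
# Hypothetical values for illustration -- adjust to your tenancy.
COMPARTMENT_OCID="ocid1.compartment.oc1..example"
VCN_CIDR="10.0.0.0/16"
PUBLIC_CIDR="10.0.1.0/24"
PRIVATE_CIDR="10.0.2.0/24"

# Guarded so the sketch is safe to run without the OCI CLI installed.
if command -v oci >/dev/null 2>&1; then
  # 1. Create the VCN.
  oci network vcn create \
    --compartment-id "$COMPARTMENT_OCID" \
    --cidr-block "$VCN_CIDR" \
    --display-name "demo-vcn"

  # 2. Create public/private subnets using the VCN OCID from step 1's output:
  # oci network subnet create --vcn-id <vcn-ocid> --compartment-id "$COMPARTMENT_OCID" \
  #   --cidr-block "$PRIVATE_CIDR" --prohibit-public-ip-on-vnic true ...
fi
```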

You can check the Oracle documentation for step-by-step walkthroughs of each exercise.

By the end of this hands-on practice, you’ll have a working network in OCI and a solid understanding of how traffic flows between your resources. 






How I Passed the Certified Kubernetes Administrator (CKA) Exam in 2025 – From Zero to Certified

Becoming a Certified Kubernetes Administrator (CKA) was never on my roadmap—until curiosity met opportunity. With a background in Oracle databases, enterprise applications, and cloud technologies, I had little to no hands-on exposure to Docker or Kubernetes. But once I dipped my toes in, there was no turning back.

This blog post is for anyone considering the CKA exam — especially those from non-container backgrounds — to show that it's achievable with the right plan, mindset, and practice.



πŸ“Œ CKA Exam Overview (2025 Version)

Before diving into the preparation, let’s quickly understand what the exam entails:

Duration: 2 hours
Format: Online, performance-based (100% hands-on tasks)
Passing Score: 66%
Number of Questions: Around 15–20 practical questions (weighted)
Documentation: Full access to Kubernetes official documentation during the exam
Exam Environment: Conducted on PSI Secure Browser (check system compatibility in advance)

πŸ› ️ My Starting Point: Zero Container Experience


Coming from a background in Oracle databases, enterprise applications, and cloud infrastructure, I had developed a strong foundation in systems and architecture. However, I hadn’t yet had the opportunity to work directly with Docker or Kubernetes in my previous roles — they simply hadn’t intersected with the projects I was part of. That changed when I made a conscious decision to explore containerization and Kubernetes, recognizing its growing importance in modern infrastructure and DevOps practices.

🧱 Building the Foundation

Here’s how I structured my preparation, step by step.

1️⃣ Learn Linux Basics

Thanks to my previous experience, I already had a working knowledge of Linux environments. However, for those starting out, I highly recommend strengthening your grip on the following areas:

  • Basic shell commands and scripting
  • File system navigation and permissions
  • Process and service management (ps, top, kill, systemctl)
  • System monitoring (df, free, uptime, vmstat)
  • Search and filter commands (grep, find, awk, cut)
  • Proficiency with vi or vim editor
While Kubernetes knowledge is the focus, Linux familiarity is what enables smooth troubleshooting and faster execution during the exam.
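A quick way to warm up on the search-and-filter commands above is to practice them against a throwaway file (the file path and contents here are just for demonstration):

```shell
# Create a small sample file to practice against.
printf 'alpha 1\nbeta 2\ngamma 3\n' > /tmp/demo.txt

# grep: find lines containing a pattern
grep beta /tmp/demo.txt

# awk: print only the second column of each line
awk '{print $2}' /tmp/demo.txt

# cut: extract the first space-delimited field
cut -d' ' -f1 /tmp/demo.txt

# find: locate the file we just created
find /tmp -maxdepth 1 -name 'demo.txt'
```

Being able to chain these with pipes without thinking is exactly the kind of fluency that saves minutes during the exam.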

2️⃣ Understand Docker Fundamentals

Containers are the building blocks of Kubernetes. I took a beginner-level Docker course to understand:
  • Images vs Containers
  • Volumes
  • Networking
  • Dockerfile basics
  • Common CLI commands (docker run, docker build, docker exec, etc.)
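The handful of commands above cover most day-to-day Docker work. A minimal practice loop might look like this (the image and container names are arbitrary choices, and the block is guarded so it only runs where a Docker daemon is available):

```shell
# Any small public image works for practice.
IMAGE="nginx:alpine"

# Guarded: requires a running Docker daemon.
if command -v docker >/dev/null 2>&1; then
  docker pull "$IMAGE"                          # fetch the image
  docker run -d --name web -p 8080:80 "$IMAGE"  # run a detached container
  docker exec web nginx -v                      # run a command inside it
  docker rm -f web                              # clean up
fi
```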

πŸ“˜ Kubernetes Basics – Get Comfortable with Core Concepts

To build a strong initial understanding, I found the “Kubernetes for Beginners” course by KodeKloud to be quite helpful. It’s a good starting point for:
  • Visualizing how Kubernetes manages containers
  • Interacting with clusters using kubectl
  • Understanding key objects like Pods, Deployments, Services, Namespaces
Even if you're already familiar with cloud infrastructure, this course helps bridge the mental model between traditional workloads and Kubernetes-native thinking.

πŸ“š Deep-Dive with Books

Once I was comfortable with the basics, I explored advanced concepts using these great reads:

Nigel Poulton – The Kubernetes Book (2025 Edition)
Clear, concise, and updated for 2025 — a great starting point to demystify core concepts.
Benjamin Muschko – Certified Kubernetes Administrator (CKA) Study Guide
Tailored for the exam. Deeply technical, with exam-aligned objectives and walkthroughs.
Kubernetes Up & Running
A bit more theory-heavy but excellent for building a well-rounded understanding of Kubernetes use cases and architecture.

πŸ§ͺ Structured Practice – The Real Game Changer

Once I had a solid grip on the theoretical concepts, it became clear that the real challenge of the CKA exam lies in doing, not just knowing. Reading books and watching tutorials gave me the "why" — but I needed the "how" through hands-on practice.

To bridge that gap, I enrolled in a certification-focused course that offered:

  • Lab environments for every exam topic
  • Task-based scenarios that mirrored the actual CKA exam

  • Mock exams to simulate time pressure and multi-tasking

  • Best practices for fast and efficient kubectl usage

In particular, Mumshad Mannambeth’s CKA course stood out for its depth and structure. The interactive labs were incredibly close to real exam questions and helped reinforce troubleshooting skills in live cluster environments.

If you're planning to take the CKA, I highly recommend choosing a course that emphasizes hands-on labs — it's the single most effective way to build confidence and speed before exam day.

πŸ” Pro Tip: Efficient Documentation Navigation

Yes, you get access to Kubernetes documentation during the exam, but searching smartly saves time.
  • Use specific search terms, for example:
      site:kubernetes.io create deployment
      site:kubernetes.io persistent volume reclaimPolicy
  • Use in-page search (Ctrl + F) to jump straight to what you need with queries like kind: Deployment or spec: containers

πŸ§ͺ Simulate the Real Exam with Killer.sh

Once you feel confident with concepts and labs, test yourself with Killer.sh, the official exam simulator.

  • You get 2 full sessions, each valid for 36 hours
  • Each session includes 17 realistic and tricky practice questions
  • It tests your speed, accuracy, and stress handling under time pressure

Repeat the simulator until you consistently finish all questions within 2 hours.

πŸ’» Don’t Ignore the Exam Setup

Here are some critical final-day tips:

✅ Test the PSI Secure Browser in advance — I never had issues, but many candidates report problems. Don’t leave it to the last minute.
✅ Check webcam, mic, and internet stability
✅ Familiarize yourself with the test UI — scrolling, tabs, switching between terminals
✅ Keep your ID ready and ensure your test environment meets the proctor’s requirements

⏳ Time Is Your Biggest Opponent

Even if you know the answers, time management is key. 
Some tips:
  • Don’t get stuck on a single question
  • Flag questions for review if they need more time
  • Knock out the easier questions first to build momentum
  • Keep an eye on the question weights (not all are equal)

πŸ’ͺ Final Thoughts – You’re More Ready Than You Think

If you’ve:

➤ Built a solid foundation (Linux + Docker)
➤ Understood the core Kubernetes concepts
➤ Practiced using KodeKloud labs
➤ Completed Killer.sh sessions
➤ Navigated the docs efficiently

Preparing for the CKA exam has been both technically enriching and personally rewarding. It’s more than a certification — it’s a validation of hands-on skills and problem-solving under time pressure. 

If you're planning to pursue it, a structured approach with consistent practice will absolutely get you there.

πŸ”œ That’s all for now. See you in the next one!


Part 1 - From Zero to K8s: Understanding Kubernetes and Its Architecture

Kubernetes Architecture and Its components

In my previous blog, I walked you through how to provision a Kubernetes cluster using Oracle Kubernetes Engine (OKE) on Oracle Cloud. But before we go deeper into working with Kubernetes, it’s important to understand what Kubernetes actually is and how it works behind the scenes.

In this blog, we’ll explore the core architecture of Kubernetes, break down the master and worker node components, and build a strong foundation to help you confidently move forward with hands-on deployments.

🌐 What is a Kubernetes Cluster?

A Kubernetes cluster is a set of nodes (machines) that run containerized applications. It is the foundation on which your Kubernetes architecture operates. The cluster is responsible for the deployment, scaling, and management of your containers across a network of machines.

Key Elements of a Kubernetes Cluster:

  • Master Node: This is the control plane of the cluster. It manages the cluster’s overall state, including scheduling, scaling, and deploying applications. (More on this in the next section!)

  • Worker Nodes: These are the machines where your applications run. Each worker node contains the necessary components to run containers, including the kubelet, kube-proxy, and container runtime.

A Kubernetes cluster operates in a highly automated manner, ensuring resilience and scalability by managing application lifecycles and resources effectively.

🧠 What is Kubernetes?
Kubernetes, often abbreviated as K8s, is an open-source platform designed to automate the deployment, scaling, and management of containerized applications.
Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes has become the de facto standard for orchestrating containers in production environments.

🧩 Why Kubernetes?
  • Automated deployment and scaling of containers
  • Self-healing: Restarts failed containers, replaces and reschedules them when nodes die
  • Service discovery and load balancing
  • Infrastructure abstraction: Run apps the same way whether on-prem, in the cloud, or hybrid
  • Rollouts and rollbacks for updates with minimal downtime
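The rollout/rollback point in particular is worth seeing in command form. A sketch with kubectl (the deployment name and image tags are illustrative, and the block is guarded since it needs a configured cluster):

```shell
DEPLOY="web"  # illustrative deployment name

# Guarded: requires kubectl configured against a cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl create deployment "$DEPLOY" --image=nginx:1.25    # initial rollout
  kubectl set image deployment/"$DEPLOY" nginx=nginx:1.26   # rolling update
  kubectl rollout status deployment/"$DEPLOY"               # watch progress
  kubectl rollout undo deployment/"$DEPLOY"                 # roll back
fi
```

Kubernetes performs the update incrementally, replacing pods a few at a time so the service keeps serving traffic throughout.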

Understanding the architecture of Kubernetes is fundamental before jumping into cluster setup or workload management. Kubernetes uses a master-worker model to orchestrate containerized applications at scale.

High-Level Architecture Overview
A Kubernetes cluster consists of:
  • Master Node (Control Plane): Controls and manages the cluster.
  • Worker Nodes: Run the actual application workloads in containers.
Think of the control plane as the brain 🧠 and the worker nodes as the muscles πŸ’ͺ executing tasks.



🎯 Master Node (Control Plane) Components
These components make decisions about the cluster (scheduling, responding to events, etc.).

kube-apiserver:-  Acts as the front door to the Kubernetes cluster. It handles all external and internal requests and is the central communication hub for all other components.
  • Frontend of the control plane
  • Receives REST API calls (via kubectl or CI/CD pipelines)
  • Authenticates, validates, and processes requests

etcd:-  A key-value store that holds the cluster’s entire state — like a database for Kubernetes configuration, secrets, and metadata. It ensures consistency and persistence.
  • Consistent, distributed key-value store
  • Stores all cluster data (config, state, secrets, etc.)

kube-scheduler:- Assigns newly created pods to the most suitable node based on available resources, constraints, and scheduling rules.

kube-controller-manager:- Runs background processes (controllers) that continuously check the desired state vs. the current state and take action to keep things in sync.
Runs various controllers:
  • Node controller (notices and responds when nodes go down)
  • Replication controller (ensures the desired number of pod replicas)
  • Endpoint controller, namespace controller, etc.
  • cloud-controller-manager (optional, in cloud setups): integrates with cloud provider APIs to manage load balancers, storage, etc.

🎯 Worker Node Components
These components actually run your containers (apps, services, workloads).

✅ kubelet:- An agent that runs on every worker node. It receives instructions from the API server and ensures that the specified containers are running and healthy.
  • Agent that runs on each node
  • Registers the node with the API server
  • Ensures containers are running as expected

✅ kube-proxy:- Handles network communication and routing within the cluster. It manages access to services and ensures that traffic is directed correctly to the right pods.
  • Maintains network rules on nodes
  • Forwards traffic to the right pod using iptables/IPVS

✅ Container runtime:- Responsible for pulling container images and starting/stopping containers on the node. Kubernetes supports multiple container runtimes.
  • Software that runs containers (e.g., containerd, CRI-O, Docker)

🎯How They Work Together
Here’s a typical flow:

1. You run kubectl apply -f pod.yaml.
2. kube-apiserver receives the request and stores the desired state in etcd.
3. kube-scheduler finds the right node for the pod.
4. kubelet on the selected worker node pulls the container image and starts the pod.
5. kube-proxy ensures the pod can receive traffic if needed.
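To try this flow yourself, a minimal pod manifest is enough (the pod and image names here are illustrative; the apply step is guarded since it needs a configured cluster):

```shell
# Minimal pod manifest matching the flow described above.
cat > /tmp/pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx:alpine
EOF

# Guarded: requires kubectl configured against a cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f /tmp/pod.yaml     # API server stores the desired state in etcd
  kubectl get pod demo-pod -o wide   # the NODE column shows the scheduler's choice
fi
```

The `-o wide` output makes the scheduler's decision visible: the NODE column tells you which worker node the kubelet that started your pod lives on.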

🎯Summary Table
The table below summarizes each component and whether it runs on the master (control plane) or worker node.

kube-apiserver → Master Node
etcd → Master Node
kube-scheduler → Master Node
kube-controller-manager → Master Node
cloud-controller-manager → Master Node (optional)
kubelet → Worker Node
kube-proxy → Worker Node
Container runtime → Worker Node

🏁 Conclusion

Understanding the Kubernetes architecture is the first step toward mastering how container orchestration works. In this post, we covered what Kubernetes is, what a cluster looks like, and the key roles played by the control plane (master node) and worker nodes. These core components work together to keep your containerized applications running reliably and at scale.

In the next part of this series, we’ll dive into core Kubernetes concepts like Pods, Deployments, Namespaces, and Labels.