TYPE | Infrastructure |
CLIENT | Thailand |
PROJECT | People's Data Center Network (PDCN) |
PROJECT COST | 2,485,000,000.00 |
COMPLETION DATE | 02/06/2025 |
PROJECT INFORMATION | People's Data Center Network (PDCN)
Project Title: People's Data Center Network (PDCN)
Project Code: TH-PDCN-001
Date of Approval: September 25, 2005
Version: 1.0
Approval and Endorsement Authorities
Project Overview
As the Socialist Republic of Thailand emerges as the first nation-state in history to fully transition to an integrated electronic and digital governance framework, marked by the successful deployment of the Prachathipatai E-Government Superapp, the nation's participatory democratic institutions have articulated an urgent need for strategic investment in advanced data center infrastructure. These data centers are vital to supporting the growing demands of database management, cloud computing, and the automated decision-making technologies integral to modern governance and economic operations. Following an extensive three-month deliberative process, the national government has received an unequivocal democratic mandate to initiate the People's Data Center Network (PDCN) project, a landmark initiative poised to redefine the nexus of governance, economic innovation, and societal advancement in the digital age.

The facility follows a modular and scalable design that allows for future expansion without disruption, adopting a single-story layout with the option of vertically stacking server racks if additional density is needed. Heavy-duty materials such as reinforced concrete, steel, and blast-resistant glass ensure both structural integrity and security. The primary facility will cover approximately 3,000 square meters, with an additional mirrored site for disaster recovery. The building is rectangular, with a roof height of 12 meters to accommodate cooling systems and air handling units.

The perimeter will be secured with chain-link fences topped with razor wire and fitted with integrated motion sensors. A vehicle barricade in front of the perimeter will prevent ramming and unauthorized vehicle entry. Security gates will employ biometric and RFID access control with 24/7 monitoring. The main entrance is designed for vehicle security, with multiple checkpoints for trucks delivering hardware or supplies; a secondary emergency entrance provides limited, non-vehicular access, secured with automated bollards and with emergency response teams stationed nearby.

CCTV cameras are positioned around the building perimeter and within the facility, including thermal and infrared cameras for night surveillance, with 360-degree monitoring at entry and exit points by both personnel and the surveillance system. Intruder Detection Systems (IDS) will be integrated into the building's perimeter, with motion detectors and ground-level pressure sensors to detect unusual activity. A dedicated mobile security unit of 30 armed personnel, including 5 guard dogs, is stationed to secure sensitive areas.

The data center employs biometric scanners (fingerprint/iris recognition) at all entry points for authorized personnel, with RFID badges conferring different access levels, and mantrap doors at key entry points to prevent unauthorized access. The facility is clearly delineated into public (visitor entry), administrative, and restricted (server and critical systems) zones, each equipped with motion sensors and alarms to detect unauthorized movement (see the badge-check sketch below).

The server rooms feature rows of high-density server racks in a hot aisle/cold aisle configuration for efficient cooling. Racks on the hot-aisle side will house active storage on Solid-State Drives (SSDs), while racks on the cold-aisle side will house archival storage on Hard Disk Drives (HDDs).
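The three-zone layout and tiered RFID access levels described above can be expressed as a simple badge-check rule. The following is a minimal sketch, assuming hypothetical zone names and a three-tier clearance mapping; none of these identifiers appear in the project plan, and a real deployment would sit behind the biometric and RFID hardware described above.

```python
from enum import IntEnum

class AccessLevel(IntEnum):
    # Hypothetical clearance tiers mirroring the three building zones.
    PUBLIC = 1       # visitor entry
    ADMIN = 2        # administrative areas
    RESTRICTED = 3   # server and critical systems

# Minimum clearance required per zone (assumed mapping, for illustration).
ZONE_REQUIREMENTS = {
    "lobby": AccessLevel.PUBLIC,
    "admin_office": AccessLevel.ADMIN,
    "server_room": AccessLevel.RESTRICTED,
}

def may_enter(badge_level: AccessLevel, zone: str) -> bool:
    """Return True if a badge's clearance meets the zone's minimum level."""
    return badge_level >= ZONE_REQUIREMENTS[zone]

# An admin-level badge opens the lobby and offices but not the server room.
assert may_enter(AccessLevel.ADMIN, "lobby")
assert may_enter(AccessLevel.ADMIN, "admin_office")
assert not may_enter(AccessLevel.ADMIN, "server_room")
```

A tiered IntEnum keeps the rule to a single comparison; per-zone allow-lists would be the alternative if clearances were not strictly ordered.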
Each server rack can hold up to 40 servers, with aisles between racks for personnel access and airflow. The data center will house 4,500 servers, each with indigenous, latest-generation multi-core processors, 256 GB of indigenous, latest-generation RAM, and 100 TB of storage per unit, combining SSD for high-speed access and HDD for archival (see the worked capacity check below). The facility will be built on raised flooring 80 cm high for cable management and air circulation, using anti-static flooring to reduce dust and static-charge risks. In-row cooling units are positioned between server racks for localized cooling, CRAC units are placed strategically for efficient heat dissipation, and hot-aisle containment separates hot and cold air for energy efficiency.

RAID configurations will be used for redundancy and speed, such as RAID 6 for critical storage to tolerate multiple drive failures, and a Storage Area Network (SAN) is employed for centralized management. Indigenous, modern server virtualization software will be installed, including containerization (Docker) for application scaling. The database systems use SQL databases for structured data (PostgreSQL, MySQL) and NoSQL databases for unstructured data (MongoDB, Cassandra). For middleware, the data center will use a Service-Oriented Architecture (SOA) with message queues (RabbitMQ, Kafka) for inter-service communication. Multi-layered encryption (TLS/SSL, AES-256) is implemented alongside Identity and Access Management (IAM) for citizen authentication, and Intrusion Detection and Prevention Systems (IDPS) are deployed alongside an NSST 1.0 Architecture-grade security system and framework.

A Network Operations Center (NOC) is centrally located in the data center with visibility over all data center operations. It houses multiple workstations for network engineers and large display screens showing real-time metrics of server health, power usage, cooling efficiency, and network traffic, and redundant communication systems (internet, private lines) are implemented in case of outages. The data center uses high-speed fiber-optic connections (20 Gbps backbone) with redundant networking equipment (core, aggregation, and access layers) to avoid single points of failure, and load balancers distribute traffic evenly.

The data center will be powered by renewable energy harvested from rooftop solar panels and from the national renewable-energy smart grid. The power rooms are located in a separate secure zone at the back of the building. UPS (Uninterruptible Power Supply) systems provide immediate power during outages, and backup generators (diesel or natural gas) located outside the building provide extended backup capacity, with quick access to fuel supplies kept in fireproof storage containers.

The cooling rooms are adjacent to the server rooms but isolated to avoid heat crossover. Cooling units combine chilled-beam systems and water-cooled chillers for maximum efficiency, and a cooling tower on the building's roof or outside provides large-scale cooling. A modern, indigenous environmental control system maintains the temperature at around 18-22°C (64-72°F) and humidity at 45%-50% to minimize corrosion and static-electricity build-up. Overhead and under-floor air distribution systems direct cool air to server intakes and remove hot air from exhausts, and a rainwater harvesting system supplies cooling water, reducing dependence on municipal water.
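The stated fleet figures support a quick capacity check. The sketch below works through the arithmetic; the 12-drive RAID 6 array width is an assumed example, not a figure from the plan, while the (n - 2)/n usable-capacity ratio is the standard RAID 6 parity overhead that buys tolerance for two simultaneous drive failures.

```python
# Aggregate raw capacity from the stated fleet: 4,500 servers x 100 TB each.
servers = 4_500
tb_per_server = 100
raw_tb = servers * tb_per_server
print(f"Raw capacity: {raw_tb:,} TB (~{raw_tb / 1_000:.0f} PB)")

# RAID 6 spends two drives' worth of parity per array, so usable capacity
# is (n - 2) / n of raw. The 12-drive array width is an assumption.
drives_per_array = 12
usable_fraction = (drives_per_array - 2) / drives_per_array
print(f"Usable after RAID 6: {raw_tb * usable_fraction:,.0f} TB "
      f"({usable_fraction:.0%} of raw)")
```

Run as-is, this reports 450,000 TB (about 450 PB) raw and 375,000 TB usable under the assumed array width.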
A pre-action sprinkler system combined with clean-agent fire suppression (e.g., FM-200) is installed in the server rooms, and smoke detection is integrated into the HVAC system for early fire detection. Clearly marked and accessible emergency exits provide for personnel safety in case of fire or evacuation, and emergency lighting powered by the backup generators remains operational during power outages.

A visitor reception area is located at the front of the building with proper access control to prevent unauthorized entry into restricted zones, and the lobby security desk is equipped with biometric and RFID entry systems. The administrative office, which houses the data center management team, IT staff, and technical engineers, is located away from critical infrastructure to minimize risk. Break rooms and rest areas are provided for staff working long shifts, to improve productivity and well-being.

The PDCN will be composed of five identical data centers situated in Kalasin, Chachoengsao, Phetchabun, Lamphun, and Krabi (see the site-polling sketch after this record). |
ENCRYPTED | No |
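With five mirrored sites, the NOC described above would need per-site health visibility. The following is a minimal polling sketch, assuming hypothetical hostnames (placeholders derived from the five province names) and an assumed /health JSON endpoint; neither appears in the project document.

```python
import json
import urllib.request

# Hypothetical per-site health endpoints; hostnames are placeholders only.
SITES = {
    "Kalasin": "https://kalasin.pdcn.example/health",
    "Chachoengsao": "https://chachoengsao.pdcn.example/health",
    "Phetchabun": "https://phetchabun.pdcn.example/health",
    "Lamphun": "https://lamphun.pdcn.example/health",
    "Krabi": "https://krabi.pdcn.example/health",
}

def poll_site(url: str, timeout: float = 5.0) -> dict:
    """Fetch one site's health JSON, reporting failures instead of raising."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (OSError, ValueError) as exc:
        return {"status": "unreachable", "error": str(exc)}

def poll_all() -> dict:
    """Return a site-name -> health-report mapping for a NOC dashboard."""
    return {name: poll_site(url) for name, url in SITES.items()}

if __name__ == "__main__":
    for name, report in poll_all().items():
        print(f"{name}: {report.get('status', 'unknown')}")
```

Sequential polling keeps the sketch dependency-free; a production NOC dashboard would poll the sites concurrently and feed the results to its real-time displays.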