深入理解 Nginx (Understanding Nginx in Depth): Reading Notes, Part 1

Why Choose Nginx

  1. Faster: 1) faster response to a single request 2) faster responses than other servers under peak load
  2. High extensibility: 1) composed of loosely coupled modules 2) all modules are compiled into the binary and executed there
  3. High reliability: 1) stable modules 2) processes are relatively independent 3) a failed worker can be replaced quickly
  4. Low memory consumption: 1) 10,000 inactive HTTP Keep-Alive connections consume only 2.5 MB
  5. High concurrency: 1) a single machine supports more than 100,000 connections
  6. Hot deployment: 1) based on the separation of the master and worker processes 2) the executable, configuration, and log files can be upgraded or replaced without interrupting service
  7. BSD license

Development Preparation

Required

  1. Linux kernel version 2.6 or later (epoll is needed to handle high concurrency)
  2. GCC compiler, to compile the C code

Optional

  1. G++, to compile C++ when writing HTTP modules
  2. PCRE (Perl Compatible Regular Expressions), to use regular expressions in the configuration file; pcre-devel is required for secondary development with PCRE
  3. zlib, to gzip-compress HTTP content and reduce network transfer
  4. OpenSSL, to support the SSL protocol, or if you want MD5 or SHA hashing

Directory Structure

  1. Source code directory
  2. Intermediate build files (placed under the source directory, named objs)
  3. Deployment directory (defaults to /usr/local/nginx)
  4. Log directory

Linux Kernel Parameter Optimization

  1. Kernel parameters need to be tuned so that Nginx can deliver higher performance
  2. They are usually adjusted to the workload; a content server, a reverse proxy, or a thumbnail server would each be tuned differently

Common high-concurrency settings for /etc/sysctl.conf are shown below; apply them with sysctl -p:
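
A representative set of values, in line with the book's general recommendations (illustrative; tune to your physical memory and maximum connection count), might look like:

    # /etc/sysctl.conf (illustrative values)
    fs.file-max = 999999
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_fin_timeout = 30
    net.ipv4.tcp_max_tw_buckets = 5000
    net.ipv4.tcp_max_syn_backlog = 1024
    net.ipv4.tcp_syncookies = 1
    net.ipv4.ip_local_port_range = 1024 61000
    net.core.netdev_max_backlog = 8096
    net.core.rmem_default = 262144
    net.core.wmem_default = 262144
    net.core.rmem_max = 2097152
    net.core.wmem_max = 2097152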

Every TCP connection consumes memory to maintain its sliding window (the receiver advertises its window size to the sender, thereby throttling the sender's transmission rate). The parameters rmem_default/wmem_default/rmem_max/wmem_max must be chosen by weighing total physical memory against the maximum number of connections (worker_processes * worker_connections): an oversized sliding window may lead to OOM, while an undersized one hurts throughput for large transfers.

Compiling and Installing Nginx

After downloading the source, the simplest installation is to run the following three commands:

  1. ./configure: 1) detects the system kernel and installed software 2) parses the parameters 3) creates the intermediate directory 4) generates the C source files and Makefile according to the parameters
  2. make: 1) compiles using the generated Makefile 2) produces the object files and the final binary
  3. make install: 1) deploys nginx to the installation directory specified by the parameters 2) copies the binary and configuration files
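
That is:

    ./configure
    make
    make install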

Supplement

Below is the Dockerfile I used for practice, which follows the book's instructions to install nginx on a CentOS system:
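
A minimal sketch of such a Dockerfile (the nginx version and package list are assumptions):

    FROM centos:7

    # build dependencies: compiler, PCRE, zlib, OpenSSL
    RUN yum install -y gcc make pcre-devel zlib-devel openssl-devel wget

    # download and unpack the nginx source (version is illustrative)
    RUN wget http://nginx.org/download/nginx-1.16.1.tar.gz \
        && tar -xzf nginx-1.16.1.tar.gz

    # configure, compile, and install to the default /usr/local/nginx
    WORKDIR /nginx-1.16.1
    RUN ./configure && make && make install

    CMD ["/usr/local/nginx/sbin/nginx", "-g", "daemon off;"]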

You can run the following commands in this directory to enter an environment with nginx installed:
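
Assuming the image is built from the Dockerfile above (the tag name is illustrative):

    docker build -t nginx-practice .
    docker run -it nginx-practice /bin/bash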

After compiling, inspect the intermediate build files:
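
For example (typical contents shown as a comment):

    ls objs/
    # Makefile  nginx  nginx.8  ngx_auto_config.h  ngx_auto_headers.h  ngx_modules.c  src  ...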

After installing, inspect the deployment directory:
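
For example:

    ls /usr/local/nginx
    # conf  html  logs  sbin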

Command-Line Control

Start the binary the default way; without a path given via -c, the default configuration file is used:
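
Assuming the default prefix:

    /usr/local/nginx/sbin/nginx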

Inspect the processes:
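
For example:

    ps -ef | grep nginx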

Stop the service; with the quit signal, nginx waits for in-flight requests to finish before stopping:
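
With the default prefix, the two variants are:

    /usr/local/nginx/sbin/nginx -s stop   # fast shutdown
    /usr/local/nginx/sbin/nginx -s quit   # graceful shutdown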

Reload the configuration while running; the command first checks the configuration for errors, and if the check passes, the old worker processes are shut down the quit way and restarted:
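
For example:

    /usr/local/nginx/sbin/nginx -s reload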

Rolling Upgrade of Nginx

  1. Send a signal to notify the old master process
  2. Start the new version of the service
  3. Shut down the old service gracefully (quit); see the signal sketch below
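
In terms of signals, the standard binary-upgrade procedure looks roughly like this (assuming the default pid file location; on USR2 the old master renames its pid file to nginx.pid.oldbin):

    # replace the old binary with the new one first, then:
    kill -USR2 $(cat /usr/local/nginx/logs/nginx.pid)         # old master starts a new master running the new binary
    kill -WINCH $(cat /usr/local/nginx/logs/nginx.pid.oldbin) # gracefully stop the old worker processes
    kill -QUIT $(cat /usr/local/nginx/logs/nginx.pid.oldbin)  # gracefully stop the old master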

HBase Basics

Apache HBase is an open source, scalable, consistent, low latency, random access data store

Source: Infinite Skills

Features

Horizontally Scalable

Linear increase in servers results in linear increases in storage capacity and I/O operations

CAP Trade-off

In terms of the CAP theorem, HBase is closer to a CP system

  • Consistency: ACID (atomicity, consistency, isolation, durability) guarantees on rows
  • Availability: response time 2-3 ms from cache, 10-20 ms from disk
  • Partition Tolerance: failures don’t block the system; it might take longer to respond in order to maintain consistency

Dependencies

Apache ZooKeeper

  • Used for distributed coordination and leader election for high availability
  • Optimized to be highly available for reads
  • Not designed to scale for high write throughput

Apache Hadoop HDFS

  • Provides data durability and reliability
  • Optimized for sequential reads and writes of large files
  • Does not provide random updates, only a simple API for random reads
  • Cannot scale to tens of billions of small entities (less than a few hundred MB)

Both systems have their strengths, but neither individually provides the same properties as HBase

Random Access

Optimized for small random reads

  • Entities indexed for efficient random reads

Optimized for high throughput random writes

  • Updates without requiring a read
  • Random writes via Log-Structured Merge (LSM) trees

Short History

Inspired by Google’s Bigtable

Bigtable: A Distributed Storage System for Structured Data (2006)

BigTable

Datastore for Google’s Web Crawl Table

  • Store web page content
  • Web URL as key
  • Use MapReduce to find links and generate backlinks
  • Calculate page rank to build the Google index

Later, it was also used as the backend for Gmail, GA, Google Earth, etc.

Hadoop HDFS

Inspired by Google’s distributed file system, GFS

Timeline

Since 2009, many companies (Yahoo, Facebook, eBay, etc.) have chosen HBase for large-scale production use cases

In 2015, Google announced Bigtable with an HBase 1.0 compatible API for its Compute Engine users

2017, HBase 2.0.0

2020, HBase 3.0.0

Despite being bucketed into the NoSQL category of data stores, some interesting projects are moving NoSQL back toward SQL by using HBase as the storage engine for a SQL-compliant OLTP database system (Apache Phoenix, for example).

Use Cases

HBase’s strengths are its ability to scale and to sustain high write throughput

Many HBase apps are:

  • Ports from RDBMS to HBase
  • New low-latency big data apps

How Do You Port an RDBMS to HBase?

  • Many RDBMSs are painful to scale
  • Scaling up is no longer practical for massive data
  • Data inconsistency was not acceptable when scaling reads
  • Operationally, things get more complicated as the number of replicas increases
  • Operational techniques are not sufficient when scaling writes

To make it easier to scale, we need to discard some fundamental features that an RDBMS provides, such as:

  • text search (LIKE)
  • joins
  • foreign keys and avoid constraint checks

By changing the schema so that it contains only denormalized tables, we no longer incur replication I/O when sharding the RDBMS

At that point, porting the RDBMS to HBase is relatively straightforward
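
As a minimal illustration of that end state, a denormalized table can be created and accessed from the HBase shell (table, column family, and row key are hypothetical):

    hbase shell
    create 'users', 'info'                          # one denormalized table, one column family
    put 'users', 'user#1001', 'info:name', 'Alice'  # write a cell
    put 'users', 'user#1001', 'info:email', 'alice@example.com'
    get 'users', 'user#1001'                        # random read by row key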

Why choose HBase instead?

  • When your app needs high write and read throughput
  • When you are tired of the RDBMS’s fragile scaling operations

Data Volumes

  • Entity data: information about the current state of a particular person or thing
  • Event data (or time-series data): records of events that are generally spaced over many time intervals

Data volume explodes when we need both of them

HBase or Not

Q: Does your app expect new data to be available immediately after an update?

  • Yes: Use HBase
    • When data is queried, it must reflect the most recent values
    • Expect query responses in milliseconds
  • No: No need for HBase

Q: Is your app analytical or operational?

  • Analytical: Not optimal for HBase
    • Looks at large sets of data
    • Often filters for a particular time range
    • Better to choose Hadoop
  • Operational: Use HBase
    • Looks up a single entity or a small set of entities

Q: Does your app expect updates to be available immediately after an update?

  • Yes: Use HBase
    • Frequently modified
    • Pinpoint deletes
    • Updates must be reflected within milliseconds
  • No: No need for HBase
    • Data is append-only
    • Deletes in bulk or never
    • Updates can be ignored until the next report is run

Comparison

  Workload     HBase                                Hadoop
  Low Latency  1 ms from cache, 10 ms from disk     1 min via MR/Spark, 1 s via Impala
  Random Read  Row key is the primary index         The small file problem
  Short Scan   Sorted and efficient                 Bespoke partitioning can help
  Full Scan    Possible but non-optimal; improved   Optimized with MR, Hive, Impala
               perf with MR on snapshots
  Updates      Optimized                            Not supported

Kubernetes Short Notes (4)

Storage

Persistent Volume

Besides storing a volume on the host, Kubernetes provides several types of storage solutions:

  • NFS
  • GlusterFS
  • Flocker
  • Ceph
  • ScaleIO
  • AWS EBS
  • Azure Disk
  • Google Persistent Disk

Persistent Volume Claim

Administrators create PVs, and users create PVCs to use them. During the binding process, Kubernetes tries to find a PV that has sufficient capacity for the claim and matches any other requested properties such as access modes, volume modes, storage class, and selector

Note that a smaller claim may get bound to a larger volume if all the other criteria match and there are no better options

There is a one-to-one relationship between a PV and a PVC; no other claim can utilize the remaining capacity in the volume

Configure the persistentVolumeReclaimPolicy field to define what happens to the PV after its PVC is deleted (see the sketch after the list):

  • Retain (default)
  • Delete
  • Recycle
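
A minimal sketch of a PV and a matching claim (names, sizes, and the hostPath are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-vol1
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: /tmp/data        # for local experiments only
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi       # may bind to the larger 1Gi PV above
    EOF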

Networking

Networking for Linux Basics

Network Switch

A switch is a device in a computer network that connects other devices together; it can only enable communication within a single network

Host A(192.168.1.10)[eth0] ↔ Switch(192.168.1.0) ↔ [eth0]Host B(192.168.1.11)

Router

A router is a device/service that provides the function of routing IP packets between networks

Switch(192.168.1.0) ↔ [192.168.1.1]Router[192.168.2.1] ↔ Switch(192.168.2.0)

Route/Gateway

A gateway (in network terms) is a router that serves as the access point connecting a network to other networks

Default Gateway

If none of the forwarding rules in the routing table fits a given destination address, the default gateway is chosen as the router of last resort

Forwarding packets between interfaces

By default in Linux, packets are not forwarded from one interface to the next, for security reasons

Explicitly allow it:
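
For example:

    # check the current setting, then enable forwarding for the running system
    cat /proc/sys/net/ipv4/ip_forward
    echo 1 > /proc/sys/net/ipv4/ip_forward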

Persist the setting:
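
For example:

    # /etc/sysctl.conf
    net.ipv4.ip_forward = 1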

DNS

Translate host names to IP addresses by configuring /etc/hosts

When an environment has too many entries, or IP addresses are not persistent, we need a DNS server

The host will look up an entry in /etc/hosts first, then query DNS. This order can be changed in the config file /etc/nsswitch.conf

You can configure the DNS server to forward unknown host names to a public name server on the Internet, for example to reach www.google.com:

private DNS → Root DNS → .com DNS → google DNS → cache the result

When looking for a host in the same domain, we want to simply use the host name rather than the full name, e.g. web instead of web.mycompany.com; therefore we specify the domain names to append in /etc/resolv.conf
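
A minimal /etc/resolv.conf illustrating both settings (addresses and domain are hypothetical):

    nameserver 192.168.1.100   # the private DNS server
    search mycompany.com       # appended when resolving short names like "web"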

There are records stored in DNS with specific types:

  • A: ipv4
  • AAAA: ipv6
  • CNAME: name to name mapping

You can use tools like nslookup and dig to debug; note that nslookup only queries the DNS server, not /etc/hosts

There are plenty of DNS solutions, such as CoreDNS. Besides configuring from files, CoreDNS supports other ways of configuring DNS entries through plugins, like the kubernetes plugin

Network Namespace

A namespace is a way of scoping a particular set of identifiers

Linux provides namespaces for networking and processes: if a process runs within a process namespace, it can only see and communicate with other processes in the same namespace

Linux starts up with a default network namespace

Each network namespace has its own routing table and has its own set of iptables

Connect namespaces together using a virtual Ethernet pair (or virtual cable, pipe)
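
A sketch with iproute2 (namespace names and addresses are illustrative):

    ip netns add red
    ip netns add blue

    # create a veth pair and move one end into each namespace
    ip link add veth-red type veth peer name veth-blue
    ip link set veth-red netns red
    ip link set veth-blue netns blue

    # assign addresses and bring the links up
    ip -n red addr add 192.168.15.1/24 dev veth-red
    ip -n blue addr add 192.168.15.2/24 dev veth-blue
    ip -n red link set veth-red up
    ip -n blue link set veth-blue up

    ip netns exec red ping 192.168.15.2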

When more namespaces need to be connected, use a virtual switch to create a virtual network. There are a few solutions:

  • Linux Bridge
  • Open vSwitch

When a private virtual network needs to reach the outer network, it needs a gateway; the host is that gateway

For the destination network to be able to respond, enable NAT on the host acting as the gateway.

Add a rule in the NAT table's POSTROUTING chain to masquerade, i.e. replace the source address on all packets coming from the source network 192.168.15.0 with the host's own IP address.

Thus anyone receiving these packets outside the network will think that they are coming from the host and not from within the namespaces
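
For example, for the 192.168.15.0/24 network used above:

    iptables -t nat -A POSTROUTING -s 192.168.15.0/24 -j MASQUERADE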

Add a route in the namespace using the host as the default gateway to reach the outside world:
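
For example (the gateway address, i.e. the host's address on the virtual switch, is illustrative):

    ip netns exec blue ip route add default via 192.168.15.5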

For the outside world to reach a namespace in the private network, add a port-forwarding rule with iptables saying that any traffic coming to port 80 on the host is forwarded to port 80 on the IP assigned to the namespace:
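
For example (the namespace IP is illustrative):

    iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.15.2:80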

Kubernetes Short Notes (3)

Cluster Maintenance

OS Upgrade

Pod Eviction Timeout

When a node is down for more than 5 minutes (the default), its pods are terminated; a pod will be recreated if it belongs to a ReplicaSet

Drain, Cordon, Uncordon

If we’re not sure the node will come back online within 5 minutes, we can drain the node.

After the drained node is upgraded and comes back, it is still unschedulable; uncordon the node to make it schedulable again.

Note that the previous pods won’t automatically be rescheduled back onto the node.
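
Typical commands (the node name is illustrative):

    kubectl drain node-1 --ignore-daemonsets   # evict pods and mark the node unschedulable
    kubectl cordon node-1                      # mark unschedulable without evicting anything
    kubectl uncordon node-1                    # make the node schedulable again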

Cluster Upgrade

The core control plane components’ versions can differ, but should follow certain rules:

  • the kube-api is the primary component; no other component’s version may be higher than the kube-api’s
  • the other components can be lower by one or two minor versions:
    • kube-api: x
    • controller-manager, kube-scheduler: x, x-1
    • kubelet, kube-proxy: x, x-1, x-2
  • kubectl can be one version higher than kube-api: x+1, x, x-1

Kubernetes supports only the three most recent minor versions. The recommended approach is to upgrade one minor version at a time.

Updating the cluster depends on how you deployed it:

  • cloud provider: a few clicks in the UI
  • kubeadm: using the upgrade argument (you should upgrade the kubeadm tool first!); see the sketch after this list
  • the hard way from scratch: manually upgrade the components yourself
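
A rough kubeadm flow on the master node (the target version is illustrative):

    apt-get install -y kubeadm=1.12.0-00   # upgrade the kubeadm tool first
    kubeadm upgrade plan                   # check current and available versions
    kubeadm upgrade apply v1.12.0          # upgrade the control plane components
    apt-get install -y kubelet=1.12.0-00   # then upgrade the kubelet
    systemctl restart kubelet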

Two major steps:

  1. upgrade the master node; while the control plane components are down, all management functions are unavailable, but the applications deployed on worker nodes keep serving
  2. upgrade the worker nodes, with strategies:
    • upgrade all at once, with downtime
    • upgrade one at a time
    • create new nodes and move the workloads over, then finally remove the old nodes

When you run a command like kubectl get nodes, the VERSION column indicates the version of the kubelet

Backup and Restore

Master / Node DR

  • Cordon & drain
  • Provision replacement master / node

ETCD DR

Option: Backup resources

Save objects as copies by querying the kube-api, for example:
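
    kubectl get all --all-namespaces -o yaml > all-resources.yaml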

Option: Backup ETCD

Making copies of the ETCD data directory

Or use the etcdctl command-line tool:

  1. Make a snapshot (see the sketch after this list)

    Remember to specify the certificate files for authentication
  2. Stop kube-api
  3. Restore snapshot

    When ETCD restores from a backup, it initializes a new cluster configuration and registers the members of ETCD as new members of a new cluster. This prevents a new member from accidentally joining an existing cluster.
    For example, when using a snapshot to provision a new etcd cluster for testing purposes, you don’t want the members of the new test cluster to accidentally join the production cluster.

  4. Configure etcd.service with the new data directory and the new cluster token

    During a restore, you must provide a new cluster token and the same initial cluster configuration

  5. Restart ETCD service
  6. Start kube-api
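
A sketch of the etcdctl commands for steps 1 and 3 (endpoints, certificate paths, and directories are illustrative):

    # 1. take the snapshot, authenticating with the etcd server certificates
    ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot-pre-boot.db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/etcd/ca.crt \
      --cert=/etc/etcd/etcd-server.crt \
      --key=/etc/etcd/etcd-server.key

    # 3. restore into a fresh data directory as a new cluster
    ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot-pre-boot.db \
      --data-dir=/var/lib/etcd-from-backup \
      --initial-cluster-token=etcd-cluster-1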

Persistent Volume DR

You can’t rely on Kubernetes for backing up and restoring persistent volumes.

If you’re using cloud-provider-specific persistent volumes like EBS volumes, Azure managed disks, or GCE persistent disks, you should use the cloud provider’s snapshot APIs

Kubernetes Short Notes (2)

Scheduling

Manual Scheduling

  • Bind the pod to a node via the nodeName property; until it is bound, the pod stays in the Pending state

  • Manual ways to bind:

    • specify spec.nodeName (not updatable)

    • create a Binding object (see the sketch after this list)
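
A sketch of the Binding approach: POST a Binding object to the pod's binding subresource (pod, node, and API server address are illustrative):

    curl --header "Content-Type: application/json" --request POST \
      --data '{
        "apiVersion": "v1",
        "kind": "Binding",
        "metadata": { "name": "nginx" },
        "target": { "apiVersion": "v1", "kind": "Node", "name": "node-2" }
      }' \
      http://localhost:8080/api/v1/namespaces/default/pods/nginx/binding/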

Labeling

Used to group and select objects; for example, in a ReplicaSet object’s config:

  • metadata.labels sets the labels of the ReplicaSet itself
  • spec.template.metadata.labels sets the labels of the Pod
  • spec.selector.matchLabels defines how the ReplicaSet discovers its Pods

Annotations

Used to record other details for integration purposes, e.g. build info, contact details

Restriction

Taint/Toleration

Ensure pods without a matching toleration cannot get scheduled onto a tainted node

  • Taint the nodes

  • Set the pods’ tolerations; three effects are available for pods that do not tolerate the taint:

    • NoSchedule
    • PreferNoSchedule: not guaranteed
    • NoExecute: new pods are not scheduled, existing pods are evicted

Note that values in the toleration fields must use double quotes (see the sketch below)
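
A sketch (node name, key, and value are illustrative):

    kubectl taint nodes node-1 app=blue:NoSchedule

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:
      - key: "app"
        operator: "Equal"
        value: "blue"
        effect: "NoSchedule"
    EOF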

Node Selector

Limit the pod so it can only get scheduled onto one kind of node

  • Label the node
  • Set the nodeSelector

Note there are no OR or NOT conditions; use node affinity instead (see the sketch below)
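
A sketch (label and pod name are illustrative):

    kubectl label nodes node-1 size=large

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: data-processor
    spec:
      containers:
      - name: data-processor
        image: data-processor   # hypothetical image
      nodeSelector:
        size: large
    EOF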

Node Affinity

Limit the pod so it gets scheduled onto one or more particular nodes

  • Label the node
  • Set the nodeAffinity (see the sketch below)
  • operators: In, NotIn, Exists, DoesNotExist, Gt, Lt
  • 3 types: requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution, and the planned requiredDuringSchedulingRequiredDuringExecution
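
A sketch of the required-during-scheduling form (labels are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: data-processor
    spec:
      containers:
      - name: data-processor
        image: data-processor   # hypothetical image
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: size
                operator: In
                values: ["large", "medium"]
    EOF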

Combine Taint/Toleration with NodeSelector or NodeAffinity to cover more complex scenarios

Resources

Request

  • The scheduling is based on the resource requests
  • By default, k8s assumes a pod requires 0.5 CPU and 256Mi of memory

Limit

  • By default, k8s limits a pod to 1 CPU and 512Mi of memory
  • When a pod tries to exceed resources beyond the limit:
    • cpu: k8s throttles the cpu and won’t kill the pod
    • memory: k8s kills the pod with an OOM error
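
The corresponding container fields (values mirror the defaults mentioned above; pod and image are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            cpu: "0.5"
            memory: "256Mi"
          limits:
            cpu: "1"
            memory: "512Mi"
    EOF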

Static Pods

Used for creating control plane components (as kubeadm does)

Without intervention from the kube-api server, the kubelet can manage a node independently by monitoring config files in the file system; it can create, recreate, update, and delete Pods, the only object it manages this way

  • --pod-manifest-path=/etc/kubernetes/manifests
  • --config=kubeconfig.yaml (staticPodPath)

While a static pod is created, the kube-api server only gets a readable mirror object and does not have the ability to update or delete it

Multiple Scheduler

  • copy the kube-scheduler configs from /etc/kubernetes/manifests
  • rename the scheduler via --scheduler-name
  • if one master node runs multiple schedulers:
    • set --leader-elect=false
  • if multiple masters run multiple schedulers, only one scheduler can be active at a time:
    • set --leader-elect=true
    • set --lock-object-name to differentiate the custom scheduler from the default
  • specify the scheduler for a pod via schedulerName (see the sketch below)
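
A sketch (the scheduler name is illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      schedulerName: my-custom-scheduler
      containers:
      - name: nginx
        image: nginx
    EOF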

Kubernetes Short Notes (1)

Cluster Architecture

Master Node

  • ETCD cluster
  • kube-scheduler
  • kube-controller-manager

These components communicate via the kube-api server

Worker Node

  • container runtime engine, e.g. Docker, Rocket, ContainerD
  • kubelet: the agent that runs on the node and listens for instructions from the kube-api
  • containers

The services deployed within worker nodes communicate with each other via kube-proxy

Objectives

ETCD

  • a distributed, reliable key-value store
  • client communication on port 2379
  • server-to-server communication on port 2380

kube-api

  • primary management component

  • setup:

    1. using kubeadm tools: kube-api is deployed as a pod in the kube-system namespace; the manifest is at /etc/kubernetes/manifests/kube-apiserver.yaml
    2. the hard way: the options are at /etc/systemd/system/kube-apiserver.service; search for the kube-apiserver process on the master node

  • example: when applying a deployment using kubectl, the kube-api server:

    1. authenticates the user
    2. validates the HTTP request
    3. the kube-scheduler monitors the changes from the kube-api, then:
      • retrieves the node information from the kube-api
      • schedules the pod to a node; the instruction reaches the kubelet through the kube-api
    4. updates the pod info in ETCD

kube-controller-manager

  • continuously monitors the state of components
  • the controllers are packaged into a single process called kube-controller-manager, which includes:
    1. deployment-controller, cronjob, service-account-controller …
    2. namespace-controller, job-controller, node-controller …
    3. endpoint-controller, replicaset, replication-controller (replica set) …
  • remediates situations

kube-scheduler

  • decides which pod goes onto which node, in two phases:
    1. filter nodes
    2. rank nodes

kubelet

  • follows instructions from the kube-scheduler and controls the container runtime engine (e.g. Docker) to run or remove containers
  • when using kubeadm tools to deploy the cluster, the kubelet is not installed by default on worker nodes; it needs to be installed manually

kube-proxy

  • runs on each node in the cluster
  • creates iptables rules on each node to forward traffic heading to the IP of a service to the IP of an actual pod
  • the kubeadm tool deploys kube-proxy as a DaemonSet on each node

pod

  • the containers are encapsulated into a pod
  • a single instance of an application, the smallest object in k8s
  • containers in the same pod share storage and network namespaces, and are created and removed at the same time
  • multi-container pods are a rare use case

ReplicationController

  • apiVersion supported in v1
  • the process that monitors the pods
  • maintains HA and the specified number of pods running across all nodes
  • only cares about pods whose RestartPolicy is set to Always
  • scalable and replaceable applications should be managed by a controller
  • use cases: rolling updates, multiple release tracks (multiple replication controllers replicating the same pod but using different labels)

ReplicaSets

  • the next generation of ReplicationController
  • apiVersion supported in apps/v1
  • enhanced filtering in .spec.selector (the major difference)
  • be aware of non-template pods that carry the same labels
  • using a Deployment as a replacement is recommended; it owns and manages its ReplicaSets

Deployment

  • provides replication via ReplicaSets, plus:
    • rolling updates
    • rollout
    • pause and resume

Namespace

  • namespaces created at cluster creation:

    1. kube-system
    2. kube-public
    3. default

  • each namespace can be assigned a quota of resources

  • a DNS entry in the format SERVICE_NAME.NAMESPACE.svc.cluster.local is automatically created at service creation

    1. cluster.local is the default domain name of the cluster

  • permanently configure the namespace for the current context (see the sketch below)
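
For example (the namespace name is illustrative):

    kubectl config set-context $(kubectl config current-context) --namespace=dev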
