Google Ads API Beta(v2.1) Short Notes

Google Ads API


  • Developer token
    • generated from a Google Ads Manager Account
    • the same token can be used for accounts that are not linked
    • token states: pending, approved
    • only an approved token can connect to the API for production
    • any developer token works if you are using a Test Manager Account (see docs)
  • Customer ID
    • the account number of a Google Ads account
    • can be set or replaced programmatically in the SDK
  • OAuth2 client credentials
    • generated from a GCP project / API Console
    • used for managing the API users
  • Client Library

Basic first call (installed application flow)

Step 1

  1. register a Google Ads Manager Account for production use
  2. take the developer token from the UI
    > TOOL > SETTINGS > API Center

step refs:

You must use a production (non-test) manager account’s developer token to make API calls against a test account. Even if the token is pending approval, you can still use it to make calls against test accounts.

Step 2

  1. create a Google Account for testing
  2. use the Google Account to register a Google Ads Manager Account
  3. log in and create one customer account (customer accounts created by a test manager account are test accounts only)
  4. create a campaign under the customer account

Step 3

If you already have an OAuth client:

  1. assume the client ID is
  2. go to the API Console, find the OAuth 2.0 client ID, then download the client secret JSON file
  3. use this secret to request a refresh token


If not:

  1. create a GCP project
  2. enable the Google Ads API in the API Console page
  3. create an OAuth client, assume the client ID is
  4. download the client secret JSON file
  5. use this secret to request a refresh token

step refs:


When requesting an OAuth2 refresh token, make sure you’re logged in as the test manager account user

To access a test account using OAuth2, the test manager account user must grant permission to your client application. Therefore, when requesting a refresh token, ensure you’re logged in as the test manager account rather than the production manager account.

If you want to switch from the test manager account to the production manager account, simply reconfigure the client library with the production manager account’s refresh token.

Step 4

  1. install the client library
  2. create an SDK config file google-ads.yml and insert the values:

    • developer_token
    • client_id
    • client_secret
    • refresh_token
    • login_customer_id
  3. initialize a client object that loads the config file
  4. make the first call to search for the campaign we just created
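
A sketch of google-ads.yml with the keys above (the placeholder values are obviously not real credentials):

```yaml
developer_token: INSERT_DEVELOPER_TOKEN_HERE
client_id: INSERT_OAUTH2_CLIENT_ID_HERE
client_secret: INSERT_OAUTH2_CLIENT_SECRET_HERE
refresh_token: INSERT_REFRESH_TOKEN_HERE
login_customer_id: INSERT_LOGIN_CUSTOMER_ID_HERE
```

The client library looks for this file (by default in the home directory) when it initializes.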






    Kubernetes Short Notes (2)

    • Devops


    Manual Scheduling

    • Without a scheduler, a pod stays in the Pending state until it is bound to a node via the nodeName property

    • Manual ways to bind:

      • specify spec.nodeName at creation time (not updatable afterwards)

      • create a Binding object
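
    A sketch of the Binding approach (the pod and node names are made-up examples); the object is posted to the pod's binding subresource:

```yaml
apiVersion: v1
kind: Binding
metadata:
  name: nginx        # the pod to bind
target:
  apiVersion: v1
  kind: Node
  name: node-1       # the node to bind it to
```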


    Labels and Selectors

    Used to group and select objects. For example, in a ReplicaSet config:

    • metadata.labels sets labels on the ReplicaSet itself
    • spec.template.metadata.labels sets labels on the Pods
    • spec.selector.matchLabels defines how the ReplicaSet discovers its Pods
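
    The three label locations above in a minimal ReplicaSet manifest sketch (names and image are made-up examples):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
  labels:
    tier: frontend            # metadata.labels: the ReplicaSet itself
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web                # how the ReplicaSet discovers its Pods
  template:
    metadata:
      labels:
        app: web              # labels on the Pods; must satisfy the selector
    spec:
      containers:
        - name: nginx
          image: nginx
```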


    Annotations

    Used to record other details for integration purposes, e.g. build info, contact details



    Taints and Tolerations

    Prevent pods without a matching toleration from being scheduled to a tainted node

    • Taint the nodes

    • Set the pods’ tolerations; three taint effects are available for non-tolerant pods:

      • NoSchedule
      • PreferNoSchedule: not guaranteed
      • NoExecute: new pods are not scheduled, existing pods are evicted

    Note the values in toleration keys must use double quotes
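
    A sketch assuming the node was tainted with kubectl taint nodes node-1 app=blue:NoSchedule (node name and key/value are examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: nginx
      image: nginx
  tolerations:
    - key: "app"             # values must be double-quoted
      operator: "Equal"
      value: "blue"
      effect: "NoSchedule"
```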

    Node Selector

    Limits the pod to be scheduled only on certain kinds of nodes

    • Label the node
    • Set the nodeSelector

    Note there are no OR or NOT conditions; use node affinity instead
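
    A sketch assuming the node was labeled with kubectl label nodes node-1 size=large:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    size: large              # matches the node label
```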

    Node Affinity

    Limits the pod to be scheduled on one or more particular nodes

    • Label the node
    • Set the nodeAffinity
    • operators: In, NotIn, Exists, DoesNotExist, Gt, Lt
    • 3 types: requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution, and the planned requiredDuringSchedulingRequiredDuringExecution

    Combine Taints/Tolerations with NodeSelector or NodeAffinity to cover more scenarios
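
    A nodeAffinity sketch of the required-during-scheduling type, assuming nodes labeled size=large or size=medium:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: nginx
      image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: size
                operator: In     # also: NotIn, Exists, DoesNotExist, Gt, Lt
                values:
                  - large
                  - medium
```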



    Resource Requests

    • Scheduling is based on the pods’ resource requests
    • By default, k8s assumes a pod requires 0.5 cpu and 256Mi memory


    • By default, k8s limits a pod to 1 cpu and 512Mi memory
    • When a pod tries to exceed a resource limit:
      • cpu: k8s throttles the cpu, won’t kill the pod
      • memory: k8s kills the pod with an OOM error
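
    The defaults above, written explicitly in a pod spec sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:            # what the scheduler reserves
          cpu: 0.5
          memory: "256Mi"
        limits:              # cpu is throttled, memory overuse is OOM-killed
          cpu: 1
          memory: "512Mi"
```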

    Static Pods

    Used for creating control plane components (by kubeadm tools)

    Without intervention from the kube-apiserver, the kubelet can manage a node independently by monitoring config files in the filesystem; it can create, recreate, update and delete Pod objects only

    • --pod-manifest-path=/etc/kubernetes/manifests
    • --config=kubeconfig.yaml (staticPodPath)

    Once a static pod is created, the kube-apiserver only gets a read-only mirror object and cannot update or delete it

    Multiple Scheduler

    • copy the kube-scheduler config from /etc/kubernetes/manifests
    • rename the scheduler with --scheduler-name
    • if one master node runs multiple schedulers:
      • set --leader-elect=false
    • if multiple masters run multiple schedulers, only one scheduler can be active at a time:
      • set --leader-elect=true
      • set --lock-object-name to differentiate the custom scheduler from the default
    • specify the scheduler for a pod with schedulerName
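
    Pointing a pod at a custom scheduler (the scheduler name is an example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  schedulerName: my-custom-scheduler   # defaults to "default-scheduler"
  containers:
    - name: nginx
      image: nginx
```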

    Kubernetes Short Notes (1)

    • Devops

    Cluster Architecture

    Master Node

    • ETCD cluster
    • kube-scheduler
    • kube-controller-manager

    These components communicate via kube-api server

    Worker Node

    • container runtime engine, e.g. Docker, Rocket, ContainerD
    • kubelet: agent that runs on the node and listens for instructions from kube-api
    • containers

    The services deployed within worker nodes communicate with each other via kube-proxy



    ETCD

    • a distributed, reliable key-value store
    • client communication on port 2379
    • server-to-server communication on port 2380


    kube-apiserver

    • primary management component

    • setup:

      1. using kubeadm tools

        • kube-apiserver is deployed as a pod in the kube-system namespace

        • the manifest is at /etc/kubernetes/manifests/kube-apiserver.yaml

      2. manual setup

        • the options are at /etc/systemd/system/kube-apiserver.service

        • search for the kube-apiserver process on the master node

    • example: applying a deployment using kubectl

      1. the kube-apiserver authenticates the user
      2. the kube-apiserver validates the HTTP request
      3. the kube-scheduler monitors changes via the kube-apiserver, then:
        • retrieves node information from the kube-apiserver
        • schedules the pod to a node, through the kube-apiserver, to the kubelet
        • updates the pod info in ETCD
    kube-controller-manager

    • continuously monitors the state of components
    • the controllers are packaged into a single process called kube-controller-manager, which includes:
      1. deployment-controller, cronjob, service-account-controller …
      2. namespace-controller, job-controller, node-controller …
      3. endpoint-controller, replicaset, replication-controller (replica set) …
    • remediates situations


    kube-scheduler

    • decides which pod goes to which node
      1. filter nodes
      2. rank nodes


    kubelet

    • follows instructions from the kube-scheduler to control the container runtime engine (e.g. docker) to run or remove containers
    • when using kubeadm tools to deploy the cluster, the kubelet is not installed by default on worker nodes; install it manually


    kube-proxy

    • runs on each node in the cluster
    • creates iptables rules on each node to forward traffic heading to the IP of a service to the IP of the actual pods
    • kubeadm deploys kube-proxy as a DaemonSet on each node


    Pod

    • containers are encapsulated into a pod
    • a pod is a single instance of an application, the smallest object in k8s
    • containers in the same pod share storage and network namespaces, and are created and removed at the same time
    • multi-container pods are a rare use case


    ReplicationController

    • supported in apiVersion v1
    • the process that monitors the pods
    • maintains HA and the specified number of pods running across all nodes
    • only cares about pods whose RestartPolicy is set to Always
    • scalable and replaceable applications should be managed by the controller
    • use cases: rolling updates, multiple release tracks (multiple replication controllers replicating the same pod but using different labels)


    ReplicaSet

    • next generation of ReplicationController
    • supported in apiVersion apps/v1
    • enhanced filtering in .spec.selector (the major difference)
    • be aware of non-template pods that have the same labels
    • using a Deployment as a replacement is recommended; it owns and manages its ReplicaSets


    Deployment

    • provides replication via ReplicaSet, plus:
      • rolling update
      • rollout
      • pause and resume
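
    A minimal Deployment sketch; it creates and manages a ReplicaSet under the hood (names and image are examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx
```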


    Namespaces

    • namespaces created at cluster creation:

      1. kube-system
      2. kube-public
      3. default

    • each namespace can be assigned a quota of resources

    • a DNS entry in the format SERVICE_NAME.NAMESPACE.svc.cluster.local is automatically created at service creation

      • cluster.local is the default domain name of the cluster

    • permanently configure the default namespace with kubectl config set-context


    Generator as Coroutines

    • Python


    • cooperative multitasking (cooperative routines)
    • concurrent, not parallel (a Python program executes on a single thread)

    The way to create coroutines:

    • generators (asyncio)
    • native coroutines (using async/await)


    • concurrency: tasks start, run and complete in overlapping time periods
    • parallelism: tasks run simultaneously


    • cooperative: control is relinquished to another task voluntarily; controlled by the application (developer)
    • preemptive: control is relinquished to another task involuntarily; controlled by the OS

      some sort of scheduler is involved


    • Global Interpreter Lock (GIL)

      Only one native thread executes at a time.

      Use process-based parallelism to avoid the GIL, not thread-based.

      The Python threading module uses threads instead of processes. Threads run in the same memory heap, whereas processes run in separate memory heaps, which makes sharing information between processes and object instances harder. Because threads share the same memory heap, multiple threads can write to the same location in memory; this is why the global interpreter lock (GIL) in CPython was created as a mutex to prevent it.

    Make the right choice

    • CPU Bound => Multiprocessing
    • I/O Bound, Fast I/O, Limited Connections => Multithreading
    • I/O Bound, Slow I/O, Many Connections => Concurrency

    Use deque

    A much more efficient way to implement a stack or a queue.

    Operating on 10,000 items, averaged over 1,000 runs (times in seconds):

    operation        list     deque
    append (right)   0.87     0.87
    pop (right)      0.002    0.0005
    insert (left)    20.8     0.84
    pop (left)       0.012    0.0005

    Create an unbounded deque with deque() or deque(iterable).
    Create a bounded deque with deque(maxlen=n). If it is full, a corresponding number of items is discarded from the opposite end.
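
    A quick sketch of both variants:

```python
from collections import deque

# Unbounded deque: O(1) appends and pops at both ends
dq = deque([1, 2, 3])
dq.appendleft(0)          # cheap, unlike list.insert(0, ...)
dq.append(4)
print(list(dq))           # [0, 1, 2, 3, 4]

# Bounded deque: when full, items are discarded from the opposite end
ring = deque(maxlen=3)
for i in range(5):
    ring.append(i)
print(list(ring))         # [2, 3, 4]
```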

    Implement producer / consumer coroutine using deque
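
    One possible sketch (the function and variable names are my own): a producer generator pushes items into a shared deque, a consumer generator drains it, and a plain loop alternates between the two coroutines:

```python
from collections import deque

def produce(dq, n):
    """Push n items into the deque, yielding control after each one."""
    for i in range(n):
        dq.appendleft(i)
        yield

def consume(dq, out):
    """Drain the deque into out, yielding control when it is empty."""
    while True:
        while dq:
            out.append(dq.pop())
        yield

items = deque()
results = []
producer = produce(items, 5)
consumer = consume(items, results)

# Crude cooperative scheduling: alternate between the two coroutines
for _ in range(5):
    next(producer, None)   # returns None silently once the producer is done
    next(consumer)

print(results)             # [0, 1, 2, 3, 4]
```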

    Implement simple event loop
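
    A toy round-robin event loop over generator-based tasks (all names here are illustrative):

```python
from collections import deque

order = []  # records the interleaving for demonstration

def countdown(label, n):
    """A task that does one unit of work per scheduling slot."""
    while n:
        order.append(f"{label}{n}")
        n -= 1
        yield  # hand control back to the loop

def run(tasks):
    """Round-robin loop: run each task to its next yield, drop finished ones."""
    tasks = deque(tasks)
    while tasks:
        task = tasks.popleft()
        try:
            next(task)
            tasks.append(task)     # not finished: reschedule at the back
        except StopIteration:
            pass                   # finished: drop it

run([countdown("a", 2), countdown("b", 3)])
print(order)  # ['a2', 'b3', 'a1', 'b2', 'b1']
```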


    Context Manager

    • Python


    what is context

    the state surrounding a section of code

    why we need a context manager

    • writing try/finally every time can get cumbersome
    • it’s easy to forget to close the file

    use cases

    Useful for programs that need enter / exit handling

    • create / releasing resources
    • database transaction
    • set and reset decimal context

    Common patterns

    • open / close
    • lock / release
    • change / reset
    • start / stop
    • enter / exit


    Implement these two dunder methods:

    • __enter__

      performs the setup, optionally returns an object

    • __exit__

      performs the clean up and receives any exception (to silence or propagate)

      • needs arguments exc_type, exc_value, exc_trace to handle the exception
      • returns True to silence the exception
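
    A minimal class-based sketch of the open/close pattern (the class name is made up):

```python
class ManagedFile:
    """Context manager that opens a file on enter and closes it on exit."""

    def __init__(self, path, mode="r"):
        self.path = path
        self.mode = mode
        self.file = None

    def __enter__(self):
        # Perform the setup and return an object (bound by `as`)
        self.file = open(self.path, self.mode)
        return self.file

    def __exit__(self, exc_type, exc_value, exc_trace):
        # Perform the clean up even if an exception occurred
        self.file.close()
        return False  # propagate exceptions; True would silence them

with ManagedFile("demo.txt", "w") as f:
    f.write("hello")

print(f.closed)  # True
```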



    nested contexts