Blog:en
Bit Operation Patterns Useful in Algorithm Competitions
Left Bit Shift (<<) for 2^N
// This is 2^N
1 << N
1 << 2 <=> 000100 <=> 4 (2^2)
1 << 3 <=> 001000 <=> 8 (2^3)
1 << 4 <=> 010000 <=> 16 (2^4)
intToBitVector (right bit shift to identify the value of a particular digit of a bit string)
Converts an int into a vector<int>. This is used in the Bit Brute-force Search below.
vector<int> intToBitVector(int target, int bitDigit) {
  vector<int> S(bitDigit);
  for (int i = 0; i < bitDigit; ++i) {
    // S[0] holds the highest digit, so the vector reads left to right like the bit string
    S[bitDigit - 1 - i] = (target >> i) & 1;
  }
  return S;
}
intToBitVector(5, 3);
// => vector<int>({ 1, 0, 1 })
intToBitVector(36, 6);
// => vector<int>({ 1, 0, 0, 1, 0, 0 })
Q. How can we identify the value of a particular digit of a bit string?
A. Right bit shift, then & 1.
Ex1) -------------------
target number = 32
(32 >> 5) & 1 <=> 000001 & 000001 <=> 000001 (return 1!)
(32 >> 4) & 1 <=> 000010 & 000001 <=> 000000 (return 0)
(32 >> 3) & 1 <=> 000100 & 000001 <=> 000000 (return 0)
(32 >> 2) & 1 <=> 001000 & 000001 <=> 000000 (return 0)
(32 >> 1) & 1 <=> 010000 & 000001 <=> 000000 (return 0)
(32 >> 0) & 1 <=> 100000 & 000001 <=> 000000 (return 0)
You can see that digit 6 is 1 in the bit string of `32`.
The bit string is `100000`. The function returns `vector<int>({ 1, 0, 0, 0, 0, 0 })`.
Ex2) -------------------
target number = 36
(36 >> 5) & 1 <=> 000001 & 000001 <=> 000001 (return 1!)
(36 >> 4) & 1 <=> 000010 & 000001 <=> 000000 (return 0)
(36 >> 3) & 1 <=> 000100 & 000001 <=> 000000 (return 0)
(36 >> 2) & 1 <=> 001001 & 000001 <=> 000001 (return 1!)
(36 >> 1) & 1 <=> 010010 & 000001 <=> 000000 (return 0)
(36 >> 0) & 1 <=> 100100 & 000001 <=> 000000 (return 0)
You can see that digit 6 and digit 3 are 1 in the bit string of `36`.
The bit string is `100100`. The function returns `vector<int>({ 1, 0, 0, 1, 0, 0 })`.
Bit Brute-force Search
This is a code pattern for a brute-force search over an N-digit bit string.
It uses both left bit shift and right bit shift.
int bitDigit = 8;
// Brute-force search over 00000000 ~ 11111111
for (int bit = 0; bit < (1 << bitDigit); ++bit) {
  vector<int> S = intToBitVector(bit, bitDigit);
  // Do something with S for your search...
}
0. Background
I wrote a video-processing program with C++/clang++/OpenCV. I wanted to make code completion for OpenCV work well in my Neovim. This is a memorandum for myself.
1. Check the lspconfig
I use the Neovim built-in LSP and Mason.
For the language server, I use clangd (installed with `MasonInstall clangd`).
-- ...
require("mason").setup()
require("mason-lspconfig").setup()
require("mason-lspconfig").setup_handlers({
  function(server_name) -- default handler (optional)
    local lspconfig = require("lspconfig")
    if server_name == "tsserver" then
      -- ...
    elseif server_name == "clangd" then
      lspconfig[server_name].setup({
        on_attach = on_attach,
        filetypes = { "c", "cpp" },
      })
    else
      lspconfig[server_name].setup({
        on_attach = on_attach,
      })
    end
  end,
})
-- ...
2. Write CMakeLists.txt
I write a CMakeLists.txt for the build.
cmake_minimum_required(VERSION 3.1)
project(HelloOpenCV)
# Activate C++11 (required for OpenCV)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
# find OpenCV package
find_package(OpenCV REQUIRED)
# include OpenCV
include_directories(${OpenCV_INCLUDE_DIRS})
# executable
add_executable(main_app main.cpp)
# link OpenCV Libs
target_link_libraries(main_app ${OpenCV_LIBS})
3. Exec CMake
Let's run cmake. Keep in mind that you need to add `-DCMAKE_EXPORT_COMPILE_COMMANDS=ON`.
This option makes CMake generate `compile_commands.json`.
The clangd language server reads this file for completion.
mkdir build
cd build
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON ..
4. Link compile_commands.json to the project root
cd /move/to/project/root
ln -s build/compile_commands.json .
5. (build)
cd build
cmake --build .
Graceful Shutdown in Your Web Application
What is the problem?
When you deploy your application frequently (e.g. every 5 minutes), you shut down the old application regularly. If some requests are aborted while the process is shutting down, that is an incident. We need to wait for those requests to finish.
SIGTERM
SIGTERM is a signal that "requests" termination, whereas SIGKILL is a signal that "forces" termination.
These days, most cloud services send SIGTERM first when stopping a process. Then, x seconds later, they send SIGKILL.
| system | default behavior |
| --- | --- |
| k8s | SIGTERM -> 30 sec -> SIGKILL |
| docker stop | SIGTERM -> 10 sec -> SIGKILL |
| systemd | SIGTERM -> 30 sec -> SIGKILL |
| Amazon ECS | SIGTERM -> 30 sec -> SIGKILL |
How to implement Graceful Shutdown in your web application
It's very simple.
For Synchronous Operations (e.g. general API calls)
- Accept SIGTERM
- Stop accepting new HTTP (TCP) connections
- Wait for all in-flight requests to finish
For Asynchronous Operations (e.g. background processes)
- Accept SIGTERM
- Stop accepting new asynchronous operations
- Wait for all in-progress operations to finish
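Here is a minimal Go sketch of the synchronous case, assuming a plain net/http server; the port and the 20-second grace period are placeholders, not values from this post.

package main

import (
    "context"
    "log"
    "net/http"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    srv := &http.Server{Addr: ":8080"}

    // ctx is cancelled when SIGTERM arrives.
    ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
    defer stop()

    go func() {
        if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatal(err)
        }
    }()

    <-ctx.Done() // 1. Accept SIGTERM

    // 2 & 3. Shutdown stops accepting new connections and waits for
    // in-flight requests to finish, up to the grace period.
    shutdownCtx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
    defer cancel()
    if err := srv.Shutdown(shutdownCtx); err != nil {
        log.Printf("graceful shutdown did not complete: %v", err)
    }
}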
In order to wait for all requests to finish, we just set the grace period between SIGTERM -> SIGKILL to match the expected maximum execution time of the processes.
But if it is too long, you will see deploys that never seem to finish...
References
- https://link-and-motivation.hatenablog.com/entry/2022/09/29/090000
- https://qiita.com/suin/items/122946f7d0cd13b143f9
2024-01-03
Deploy mdBook on Cloudflare Pages with GitHub Actions
This blog is made with mdBook. Here is how it is hosted and deployed.
Infrastructure I use:
- Hosting: Cloudflare Pages
- Continuous Deployment: GitHub Actions
Overview
%%{ init: {'theme': 'dark' }}%%
sequenceDiagram
    feature branch->>main branch: Merge PR with content
    main branch->>github artifact: mdbook build & upload /dist directory
    github artifact->>production branch: download /dist directory
    production branch->>cloudflare page: commit /dist directory & deploy it
Deployment
Here's a step-by-step breakdown of the workflow.
- Trigger Workflow: The workflow is triggered when a commit is pushed to the main branch.
- Checkout: The workflow begins by checking out the repository using the actions/checkout@v4 action.
- Setup mdBook: Next, the workflow sets up mdBook (a tool to create online books from Markdown files) using the peaceiris/actions-mdbook@v1 action.
- Build Docs: The `mdbook build -d dist` command is then run to compile the Markdown files into a book, placing the output in the `dist` directory. I specified `dist` because the default `book` directory is ignored by `.gitignore`.
- Archive `dist`: The contents of the `dist` directory are then archived using the actions/upload-artifact@v3 action.
- Checkout Production Branch: In this step, the workflow checks out the production branch of the repository using the actions/checkout@v4 action. This branch is intended for the deployed version of the book.
- Download dist: The workflow then downloads the archived dist directory from the previous job using the actions/download-artifact@v3 action.
- Commit on production branch: The workflow updates the production branch with the new version of the book (`dist`).
- Cloudflare Pages auto-deploys the production branch: In the Cloudflare Pages settings, choose the `production` branch as the deploy branch and `dist` as the build output directory.
name: Deployment
on:
  push:
    branches:
      - main
    paths:
      - "src/**.md"
      - "book.toml"
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup mdbook
        uses: peaceiris/actions-mdbook@v1
        with:
          mdbook-version: 'latest'
      - name: Build docs
        run: |
          mdbook build -d dist
      - name: Archive dist
        uses: actions/upload-artifact@v3
        with:
          name: dist
          path: dist
  deploy:
    runs-on: ubuntu-latest
    needs: [build]
    permissions:
      actions: write
      checks: write
      contents: write
    steps:
      - name: Checkout production branch
        uses: actions/checkout@v4
        with:
          ref: 'production'
      - name: Download dist
        uses: actions/download-artifact@v3
        with:
          name: dist
          path: dist
      - name: Copy book to production
        run: |
          git config --global user.name 'Github Actions'
          git config --global user.email 'actions@github.com'
          git add .
          git commit -m "Deploy book to production"
          git push
2023-12-28
Learning Error Handling From Golang.
Four Patterns of Error Handling
- Delegate to upstream.
- Log (provide feedback to the user).
- Retry.
- Cease continuation and trigger panic/exit.
Principles of Error Handling
Avoid delegating Error Handling to upstream as much as possible
The deeper the hierarchy, the more complex and difficult the flow becomes to follow. It is fine for upstream code to do nothing more than log the error; handling other than logging should be done close to the call site.
Avoid defining Custom Error as much as possible.
Defining a custom error usually means delegating its handling to upstream. Handle errors as close to the calling code as possible.
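A small sketch of these first two principles, assuming a hypothetical greeting.txt file: the caller logs and falls back right at the call site instead of defining a custom error and pushing the decision upstream.

package main

import (
    "log"
    "os"
)

// loadGreeting returns the underlying error as-is; no custom error type is defined.
func loadGreeting(path string) (string, error) {
    b, err := os.ReadFile(path)
    if err != nil {
        return "", err
    }
    return string(b), nil
}

func main() {
    greeting, err := loadGreeting("greeting.txt") // hypothetical file
    if err != nil {
        // Handle the error here: log it and fall back; nothing propagates further up.
        log.Printf("using default greeting: %v", err)
        greeting = "hello"
    }
    log.Println(greeting)
}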
If your error message is good, you don't need a stack trace.
A stack trace shows its value only when the error message is not useful.
Wrap & Unwrap
Go 1.13 supports Wrap/Unwrap.
This was a common pattern before Go 1.13.
// Error Containing Another Error.
type QueryError struct {
    Query string
    Err   error
}

func (e *QueryError) Error() string { return e.Query + ": " + e.Err.Error() }
The pattern of one error containing another is pervasive, so Go 1.13 added support for it.
Wrap
You can wrap an error with %w.
if err != nil {
    return fmt.Errorf("QueryError: %w", err)
}
Unwrap
errors.Is or errors.As recursively checks the wrapped errors (they call Unwrap() internally).
if errors.Is(err, ErrPermission) {
    // Some ErrorHandling
}
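errors.As works the same way but extracts a concrete error type from the chain. A small sketch reusing the QueryError type above (the surrounding err is assumed to come from a wrapped call):

var qe *QueryError
if errors.As(err, &qe) {
    // qe now points at the wrapped *QueryError, so its fields are available.
    fmt.Println("failed query:", qe.Query)
}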
Whether to Wrap or Not?
It depends on the context. Do not wrap an error when doing so would expose implementation details. You should care about abstraction violations.
Ex1. Parser
Imagine a parser which reads a complex data structure. If an error occurs, we wish to report the line and column number at which it occurred, so it makes sense to expose the error the parser produces.
Ex2. Database Caller
Imagine a function which makes several calls to a database. It should not return an error which unwraps to the result of one of those calls. If the database used by the function is an implementation detail, then exposing its errors is a violation of abstraction.
// BAD: abstraction violation.
// If you wrap sql.ErrNoRows, your caller needs to know sql.ErrNoRows to handle the error.
err := repo.LookupUser(id)
if errors.Is(err, sql.ErrNoRows) {
    // the caller is now coupled to the repository's database implementation
}
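For contrast, here is a hypothetical sketch of the non-leaking version: the repository translates sql.ErrNoRows into its own sentinel error, so callers depend only on the repository's API (the table, type, and function names are illustrative).

// GOOD: sql.ErrNoRows never escapes this package.
package repo

import (
    "database/sql"
    "errors"
    "fmt"
)

// ErrUserNotFound is the only "not found" error callers need to know about.
var ErrUserNotFound = errors.New("user not found")

type User struct {
    ID   int
    Name string
}

type Repo struct {
    db *sql.DB
}

func (r *Repo) LookupUser(id int) (*User, error) {
    var u User
    err := r.db.QueryRow("SELECT id, name FROM users WHERE id = ?", id).Scan(&u.ID, &u.Name)
    if errors.Is(err, sql.ErrNoRows) {
        // Translate the driver error into the package's own sentinel.
        return nil, ErrUserNotFound
    }
    if err != nil {
        return nil, fmt.Errorf("lookup user %d: %w", id, err)
    }
    return &u, nil
}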
Conclusion: Whether to wrap
- Whether you wrap or not, the error text will be the same. The user gets the same information either way.
- The choice to wrap is about whether to give programs additional information so that they can make more informed decisions, or to withhold that information to preserve an abstraction layer.
References
- Golang Errors
- Working with Errors in Go 1.13
- Error Handling and Go
- Go Error Handling Strategy / Goエラーハンドリング戦略
- errors package: Unwrap, Is, As
DRAFT
https://zenn.dev/dmikurube/articles/curse-of-timezones-impl-ja
https://qiita.com/tonluqclml/items/3affc5328d11b60583a9
This one is apparently mostly good: https://www.m3tech.blog/entry/timezone-handling
It is apparently also covered in Software Tradeoffs Mistakes.
Blog:ja
2024-04-32 :: Thoughts on Writing an Interpreter in Go
Overview
The interpreter built in the book has the following structure:
mindmap
  root((Interpreter))
    core
      lexer -- lexical analysis
      parser -- parsing
      evaluator -- evaluator
    sub
      token -- token
      ast -- abstract syntax tree
      object -- object
      environment -- environment
    utils
      repl -- REPL
      test -- test
The role of each module
Lexical Analysis
The step that generates a sequence of tokens from source code.
Source Code
|
v ==> Lexical Analysis
|
Tokens
|
v
|
Abstract Syntax Tree
let x = 5 + 5;
|
Lexical Analysis
|
v
[
LET,
IDENTIFIER("x"),
ASSIGN,
INTEGER(5),
PLUS,
INTEGER(5),
SEMICOLON
]
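In Go, the lexer's output can be represented with a small token type, roughly like the sketch below (the names are illustrative, not necessarily the book's exact code).

package token

// TokenType distinguishes kinds of tokens; a string keeps debugging output readable.
type TokenType string

const (
    LET        TokenType = "LET"
    IDENTIFIER TokenType = "IDENTIFIER"
    ASSIGN     TokenType = "ASSIGN"
    INTEGER    TokenType = "INTEGER"
    PLUS       TokenType = "PLUS"
    SEMICOLON  TokenType = "SEMICOLON"
)

// Token is one unit of the lexer's output, e.g. {Type: IDENTIFIER, Literal: "x"}.
type Token struct {
    Type    TokenType
    Literal string
}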
Parsing
The step that converts tokens into structured data such as an abstract syntax tree.
Source Code
|
v
|
Tokens
|
v ==> Parsing
|
Abstract Syntax Tree
Evaluator
Takes the AST as input and evaluates it as instructions.
Source Code
|
v
|
Tokens
|
v
|
Abstract Syntax Tree
|
v ==> Evaluator
|
Result
Environment (Object)
TBD
About the Parse Strategy
The book's interpreter adopts a parsing strategy called the Pratt Parser.
(TODO: write this up later)
Things that stood out in the development flow
Build the REPL first
The book builds a REPL right at the start, creating an environment where you can debug easily. Personally, this kept my motivation up and made the learning fun.
Especially when building something you have never built before, I felt that aiming to "get a small feature working end to end first" is important for managing motivation.
Write unit tests thoroughly
The style is thoroughly test-driven development.
- The range of inputs that must be covered is huge, and checking behavior through the REPL is tedious, so writing tests first is rational.
- Small modules are stacked up to eventually build a single language, so without tests the behavior would be impossible to keep track of.
References
Memo
Sometimes you want to use tmux inside a tmux session, for example when you ssh into a remote server from your local tmux and want to use tmux there as well.
In that case, it is best to use the prefix key.
The configuration is just this one line: pressing the prefix followed by a sends the prefix to the inner session.
bind-key a send-prefix
When you want to control the nested tmux, press:
Ctrl-b a
https://blog.ccm-lulu.com/2013/02/tmux-nested-tmux-tmux-tmux.html
https://mutelight.org/practical-tmux
gcloud ssh port forwarding
The matches! macro
matches! macro: https://qiita.com/niwaka_dev/items/c5b2f5b6587e827a3247
A macro that only checks whether a value matches a pattern.
Handy when you want a rough assertion in tests, without having to implement Eq or PartialEq.
#![allow(unused)]
fn main() {
    assert!(matches!(Some(42), Some(42)));
}