The WordPress site I run on AWS has been responding slowly, so I'd like to run a few tests.
■ Evaluation method
Since I want quantitative measurements, I use the ab command to look at response speed.
Command used:
# ab -n <requests> -c <concurrency>
From each report I record the request rate, the time per request, and the transfer rate as my measurements, i.e. these four lines:
Requests per second: 23.53 [#/sec] (mean)
Time per request: 424.901 [ms] (mean)
Time per request: 42.490 [ms] (mean, across all concurrent requests)
Transfer rate: 200.96 [Kbytes/sec] received
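As a small convenience, those four summary lines can be filtered out of an ab report like this (www.testsite.local is a placeholder host, not the actual target):

```shell
#!/bin/sh
# Keep only the summary lines used as measurements in this article.
summary() {
    grep -E '^(Requests per second|Time per request|Transfer rate)'
}

# Typical use (placeholder host):
#   ab -n 100 -c 10 http://www.testsite.local/ | summary
```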
■ Reference baseline
First, as a yardstick, I measure the response of an Nginx + FastCGI setup built on a physical server.
<Server configuration>
- M/B : Intel DL61
- CPU : Core i3 3.3GHz
- MEM : 4GB x2 = 8GB
- Storage : SSD, SATA 6Gbps x1
Command: ab -n 100 -c 10
Case-1: Response from Nginx alone
---------- Nginx/1.0.15 (CentOS 6.5) : localhost ----------
* ab -n 100 -c 10 : localhost(no cache)
Requests per second: 3560.11 [#/sec] (mean)
Time per request: 2.809 [ms] (mean)
Time per request: 0.281 [ms] (mean, across all concurrent requests)
Transfer rate: 27990.69 [Kbytes/sec] received
Case-2: Response with the Nginx cache configuration
---------- Nginx/1.0.15 (CentOS 6.5) : localhost ----------
* ab -n 100 -c 10 : www.testsite.local (cache)
Requests per second: 5715.59 [#/sec] (mean)
Time per request: 1.750 [ms] (mean)
Time per request: 0.175 [ms] (mean, across all concurrent requests)
Transfer rate: 44937.73 [Kbytes/sec] received
The effect of the cache shows up in every metric.
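For context, the cached configuration here corresponds to an Nginx fastcgi_cache setup along these lines (zone name, paths, and sizes are illustrative placeholders, not the config actually used):

```nginx
# /etc/nginx/conf.d/cache.conf -- illustrative sketch, not the exact config used
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:16m
                   max_size=256m inactive=60m;

server {
    listen      80;
    server_name www.testsite.local;

    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_cache  wpcache;
        fastcgi_cache_key   "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```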
Next, to understand the network's impact, I check the response through a Gigabit switch, a 100M switch, and a 10M hub.
Case-3: Nginx cache, via Gigabit switch
---------- Nginx/1.0.15 (CentOS 6.5) : via Gigabit switch ----------
Requests per second: 3495.53 [#/sec] (mean)
Time per request: 2.861 [ms] (mean)
Time per request: 0.286 [ms] (mean, across all concurrent requests)
Transfer rate: 27482.89 [Kbytes/sec] received
Case-4: Nginx cache, via 100M switch
---------- Nginx/1.0.15 (CentOS 6.5) : via 100M switch ----------
Requests per second: 1028.54 [#/sec] (mean)
Time per request: 9.723 [ms] (mean)
Time per request: 0.972 [ms] (mean, across all concurrent requests)
Transfer rate: 8089.58 [Kbytes/sec] received
Case-5: Nginx cache, via 10M hub
---------- Nginx/1.0.15 (CentOS 6.5) : via 10M hub ----------
Requests per second: 50.81 [#/sec] (mean)
Time per request: 196.829 [ms] (mean)
Time per request: 19.683 [ms] (mean, across all concurrent requests)
Transfer rate: 400.31 [Kbytes/sec] received
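As a cross-check on the network cases, ab's Transfer rate (Kbytes/sec, where ab's K is 1024 bytes) can be converted to Mbit/s and compared against the nominal link speed:

```shell
#!/bin/sh
# Convert ab's Transfer rate (Kbytes/sec, K = 1024) into Mbit/s.
kbytes_to_mbit() {
    awk -v k="$1" 'BEGIN { printf "%.1f\n", k * 8192 / 1000000 }'
}

kbytes_to_mbit 8089.58   # 100M switch case: roughly 66 Mbit/s
kbytes_to_mbit 400.31    # 10M hub case: roughly 3.3 Mbit/s
```

The 100M case already sits around two-thirds of the wire speed, so the network is a major factor there.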
■ Rough gauge of high-traffic tolerance
Fixing the request count at 10,000, I vary the concurrency (ab -n 10000 -c 100 -> 1000).
* ab -n 10000 -c 100 : cache
Requests per second: 12667.06 [#/sec] (mean)
Time per request: 7.894 [ms] (mean)
Time per request: 0.079 [ms] (mean, across all concurrent requests)
Transfer rate: 99592.31 [Kbytes/sec] received
* ab -n 10000 -c 200 : cache
Requests per second: 8910.93 [#/sec] (mean)
Time per request: 22.444 [ms] (mean)
Time per request: 0.112 [ms] (mean, across all concurrent requests)
Transfer rate: 70060.47 [Kbytes/sec] received
* ab -n 10000 -c 300 : cache
Requests per second: 4895.10 [#/sec] (mean)
Time per request: 61.286 [ms] (mean)
Time per request: 0.204 [ms] (mean, across all concurrent requests)
Transfer rate: 38486.75 [Kbytes/sec] received
* ab -n 10000 -c 400 : cache
Requests per second: 9041.10 [#/sec] (mean)
Time per request: 44.242 [ms] (mean)
Time per request: 0.111 [ms] (mean, across all concurrent requests)
Transfer rate: 70389.04 [Kbytes/sec] received
* ab -n 10000 -c 500 : cache
Requests per second: 9243.54 [#/sec] (mean)
Time per request: 54.092 [ms] (mean)
Time per request: 0.108 [ms] (mean, across all concurrent requests)
Transfer rate: 70912.57 [Kbytes/sec] received
* ab -n 10000 -c 1000 : cache
Requests per second: 9409.71 [#/sec] (mean)
Time per request: 106.273 [ms] (mean)
Time per request: 0.106 [ms] (mean, across all concurrent requests)
Transfer rate: 66970.71 [Kbytes/sec] received
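The sweep above can be scripted; this is a sketch assuming the same placeholder host as before, with a small awk helper to pull the overall mean latency out of each report:

```shell
#!/bin/sh
# Print the overall mean "Time per request" (ms) from an ab report on stdin.
mean_ms() {
    awk -F: '/^Time per request/ && /\(mean\)$/ { split($2, f, " "); print f[1]; exit }'
}

# Fixed request count, varying concurrency (placeholder URL).
URL="http://www.testsite.local/"
for c in 100 200 300 400 500 1000; do
    command -v ab >/dev/null || break   # skip when ab is not installed
    echo "c=$c mean=$(ab -n 10000 -c "$c" "$URL" 2>/dev/null | mean_ms)ms"
done
```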
As for where to set the acceptable response-time threshold, around 50 ms feels right to me subjectively.
Assuming 50 ms is appropriate, it looks like roughly 500 concurrent connections could be handled, network effects aside.
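As a rough sanity check of that estimate, Little's law (sustainable concurrency ≈ throughput × target latency), applied to the measured rate at c=500, gives:

```shell
#!/bin/sh
# Little's law: sustainable concurrency ~= throughput x target latency.
# 9243.54 req/s is the measured rate at c=500; 0.050 s is the 50 ms target.
awk 'BEGIN { printf "concurrency ~= %.0f\n", 9243.54 * 0.050 }'
```

which lands in the same ballpark as the 500-concurrency reading above.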
To factor the network in, I look at how the response changes through a 100M switch (CISCO 2940).
---------- Nginx/1.0.15 (CentOS 6.5) : via 100M switch ----------
- 10,000 requests / 100 concurrent
# ab -n 10000 -c 100
Requests per second: 1387.62 [#/sec] (mean)
Time per request: 72.066 [ms] (mean)
Time per request: 0.721 [ms] (mean, across all concurrent requests)
Transfer rate: 10909.90 [Kbytes/sec] received
---------- Nginx/1.0.15 (CentOS 6.5) : via 100M switch ----------
- 10,000 requests / 200 concurrent
# ab -n 10000 -c 200
Requests per second: 1329.23 [#/sec] (mean)
Time per request: 150.463 [ms] (mean)
Time per request: 0.752 [ms] (mean, across all concurrent requests)
Transfer rate: 10450.80 [Kbytes/sec] received
---------- Nginx/1.0.15 (CentOS 6.5) : via 100M switch ----------
- 10,000 requests / 250 concurrent
# ab -n 10000 -c 250
Requests per second: 1353.75 [#/sec] (mean)
Time per request: 184.672 [ms] (mean)
Time per request: 0.739 [ms] (mean, across all concurrent requests)
Transfer rate: 10643.82 [Kbytes/sec] received
Total transferred: 805100 bytes
HTML transferred: 776500 bytes
Looking at the drop in response speed, even a concurrency of 100 seems tough here; the transfer rate of roughly 10,600 Kbytes/sec (about 87 Mbit/s) suggests the 100M link itself is close to saturation.
Now that the lab numbers have given me a feel for the baseline, I turn to the AWS environment.
* ab -n 100 -c 10 -H "Accept-Encoding: gzip, deflate" http:///
*** Apache + WordPress 3.8 + SuperCache ***
---------- Apache (AWS Linux) ----------
Requests per second: 0.55 [#/sec] (mean)
Time per request: 18242.125 [ms] (mean)
Time per request: 1824.213 [ms] (mean, across all concurrent requests)
Transfer rate: 17.06 [Kbytes/sec] received
The earlier tests already showed that the current setup is slow at both responding and processing requests, but roughly 18 seconds per request is far too slow.
*** Nginx/1.4.3 + Cache + WordPress 3.8 + SuperCache ***
---------- Nginx/1.4.3 (AWS Linux) ----------
Requests per second: 186.23 [#/sec] (mean)
Time per request: 53.697 [ms] (mean)
Time per request: 5.370 [ms] (mean, across all concurrent requests)
Transfer rate: 5836.17 [Kbytes/sec] received
Simply swapping in Nginx with caching already produces a large improvement.
At a concurrency of 10, this could probably be called reasonable.
■ Changes with the number of Nginx worker processes
* Nginx Process: #1 : AWS
Requests per second: 176.96 [#/sec] (mean)
Time per request: 56.510 [ms] (mean)
Time per request: 5.651 [ms] (mean, across all concurrent requests)
Transfer rate: 5590.17 [Kbytes/sec] received
* Nginx Process: #2 : AWS
Requests per second: 196.26 [#/sec] (mean)
Time per request: 50.952 [ms] (mean)
Time per request: 5.095 [ms] (mean, across all concurrent requests)
Transfer rate: 6227.37 [Kbytes/sec] received
* Nginx Process : #4 : AWS
Requests per second: 240.52 [#/sec] (mean)
Time per request: 41.577 [ms] (mean)
Time per request: 4.158 [ms] (mean, across all concurrent requests)
Transfer rate: 7507.20 [Kbytes/sec] received
* Nginx Process : #8 : AWS
Requests per second: 223.79 [#/sec] (mean)
Time per request: 44.686 [ms] (mean)
Time per request: 4.469 [ms] (mean, across all concurrent requests)
Transfer rate: 6985.02 [Kbytes/sec] received
Four worker processes looks best on the smallest EC2 instance.
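The knob varied above is worker_processes in nginx.conf; a minimal fragment with the value that measured best here (the surrounding skeleton is generic, not the exact config used):

```nginx
# /etc/nginx/nginx.conf -- relevant fragment
worker_processes  4;          # 1, 2, 4, 8 were compared; 4 performed best here

events {
    worker_connections  1024;
}
```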
* ab -n 1000 -c 100
Requests per second: 181.94 [#/sec] (mean)
Time per request: 549.644 [ms] (mean)
Time per request: 5.496 [ms] (mean, across all concurrent requests)
Transfer rate: 5741.07 [Kbytes/sec] received
Traffic is light enough that this isn't a problem, but at a concurrency of 100 the response takes about 0.5 seconds.
■ Summary
Even on the free tier, at around 10 accesses/sec, I'd say, vaguely, that it now looks "workable".
If the EC2 configuration were changed to raise requests-per-second throughput, and Auto Scaling and ElastiCache were used to raise the degree of parallelism, it could become somewhat practical, database response times aside.