With the information explosion intensifying, the micro-blogging site Twitter burst onto the scene. "Burst" is no exaggeration for Twitter's growth: from its launch in May 2006 to December 2007, a year and a half, the number of Twitter users grew from 0 to about 66,000. One year later, by December 2008, Twitter's users had reached 5 million. [1]
A prerequisite for Twitter's success is the ability to serve millions of users at the same time, and to serve them quickly. [2,3,4]
Some have argued that Twitter's business logic is simple and that the barrier to entry is therefore low. The first half is correct, but the second half is questionable. Twitter's competitiveness is inseparable from its rigorous system architecture design.
【1】 The first step is easy
Twitter's core business logic lies in Following and Be-followed. [5]
After you write and post a message, your followers immediately see the new message on their personal home pages. This is the Be-followed process.
Implementing this business process seems easy. When you write and post a message, Twitter consults the Be-followed table, finds the IDs of all your followers, and then updates each of those followers' home pages one by one. If a follower happens to have his Twitter home page open in a browser, JavaScript embedded in the page contacts the Twitter server every few tens of seconds to check whether the page has been updated; if it has, the new content is downloaded immediately, so the follower can read the newly published message.
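As a toy illustration of this fan-out (a sketch only, with invented names and in-memory dicts standing in for real storage):

```python
# Toy fan-out: when a user posts, look up the Be-followed table and prepend
# the new message to each follower's home page. These dicts are purely
# illustrative; they are not Twitter's real data structures.

be_followed = {"alice": ["bob", "carol"]}     # author -> IDs of followers
home_pages = {"bob": [], "carol": []}         # follower ID -> home-page messages

def post_message(author, text):
    for follower in be_followed.get(author, []):
        home_pages[follower].insert(0, {"author": author, "text": text})

post_message("alice", "my first message")
print(home_pages["bob"])    # bob's next poll would now show alice's message
```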
From the point of view of system architecture, it seems that the traditional three-tier architecture [6] is sufficient for this business logic. In fact, Twitter's initial system architecture was indeed three-tier.
Reference:
[1] Fixing Twitter.
[2] Twitter blows up at SXSW conference.
[3] First Hand Accounts of Terrorist Attacks in India on Twitter and Flickr.
[4] Social Media Takes Center Stage in Iran.
[5] These things about Twitter.
[6] Three tier architecture.
【2】 The three-tier architecture
The traditional approach to website architecture design is the three-tier architecture. Large-site architecture design emphasizes practicality: fashionable designs are certainly eye-catching, but the technology may be immature and the risk high. Therefore many large sites take the sound, traditional path.
When Twitter first went online in May 2006, it used the Ruby-on-Rails framework to simplify development, and the design philosophy of Ruby-on-Rails is precisely the three-tier architecture.
1. The front end, the presentation tier (Presentation Tier), uses Apache Web Server; its main task is to parse the HTTP protocol and dispatch requests from different users and of different types to the logic tier.
2. The middle, the logic tier (Logic Tier), uses Mongrel Rails Server, relying on ready-made Rails modules to reduce development effort.
3. The back end, the data tier (Data Tier), uses the MySQL database.
Let us start with the back end, the data tier.
Twitter's service can be summarized as two cores: 1. users, 2. messages. The relationship between users is one of following and being followed, that is, Following and Be-followed. A user reads only the messages written by the people he follows, and his own messages are read only by the people who follow him. Grasp these two cores, and it is not hard to understand how Twitter's other functions are implemented [7].
Around these two cores, we can start to design the Data Schema, that is, how the data stored in the data tier is organized. We might set up three tables [8]:
1. User Table: user ID, name, login name and password, status (online or not).
2. Message Table: message ID, author ID, message text, timestamp.
3. Relationship Table, recording who follows whom: user ID, the user IDs he follows (Following), the user IDs following him (Be followed).
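Expressed as SQL, the three tables might look roughly as follows. This is a guess at a minimal schema, run against SQLite purely as a stand-in for Twitter's MySQL; all column names are speculative.

```python
import sqlite3

# A minimal, speculative version of the three tables described above,
# using SQLite as a convenient stand-in for Twitter's MySQL.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users (
    user_id   INTEGER PRIMARY KEY,
    name      TEXT,
    login     TEXT,
    password  TEXT,
    online    INTEGER                             -- status: online or not
);
CREATE TABLE messages (
    message_id INTEGER PRIMARY KEY,
    author_id  INTEGER REFERENCES users(user_id),
    body       TEXT,
    created_at TEXT
);
CREATE TABLE relations (                          -- who follows whom
    follower_id INTEGER REFERENCES users(user_id),
    followee_id INTEGER REFERENCES users(user_id),
    PRIMARY KEY (follower_id, followee_id)
);
""")
```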
Next, the middle, the logic tier. When a user publishes a message, the following five steps take place:
1. Record the message in the Message Table.
2. Retrieve from the Relationship Table the IDs of the users who follow him.
3. Some of those followers are currently online, others may be offline; their status can be looked up in the User Table. Filter out the IDs of the offline users.
4. Push the IDs of the followers who are currently online into a queue (Queue), one by one.
5. Take those IDs out of the queue one by one and update those users' home pages, that is, add the newly published message.
All five of these steps are the responsibility of the logic tier (Logic Tier). The first three are easy to handle: they are simple database operations. The last two require a queueing tool. The significance of the queue is that it separates the generation of tasks from their execution.
A queue can be implemented in different ways; Apache Mina [9], for example, can be used as one. But the Twitter team built a queue of its own, Kestrel [10,11]. As for the respective strengths and weaknesses of Mina and Kestrel, no one seems to have made a detailed comparison.
Whether Kestrel or Mina, both look rather complicated. Some may ask: why not simply implement the queue with a basic data structure, such as a dynamic list or even a static array? If the logic tier runs on only one server, then a dynamic list or a static array is indeed simple enough and, with a little adaptation, can serve as a queue. What complicated tools like Kestrel and Mina are actually for will be discussed at length later in this series.
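For that single-server case, steps 4 and 5 of the publishing flow can indeed be served by a plain in-process queue. Below is a minimal Python sketch using the standard library's queue module rather than Kestrel or Mina; all names are illustrative.

```python
import queue
import threading

# Step 4: push the IDs of online followers into a queue.
# Step 5: a worker takes IDs off the queue and updates those home pages.
# The point is that generating tasks is separated from executing them.

online_followers = ["bob", "carol"]
home_pages = {"bob": [], "carol": []}
tasks = queue.Queue()

def producer(new_message):
    for follower in online_followers:          # step 4
        tasks.put((follower, new_message))

def consumer():
    while True:
        follower, message = tasks.get()        # step 5
        home_pages[follower].insert(0, message)
        tasks.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer("alice: hello world")
tasks.join()                                   # wait until all updates are applied
print(home_pages)
```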
Finally, the front end, the presentation tier. The presentation tier has two main functions: 1. HTTP processing (HTTP Processor), including parsing incoming user requests and packaging the results to be sent back; 2. dispatching (Dispatcher), distributing the received user requests to the machines of the logic tier. If the logic tier consists of only one machine, the dispatcher is meaningless; but if the logic tier consists of many machines, deciding which kind of request goes to which machine matters a great deal. The many machines of the logic tier may each be responsible for a specific function, while machines with the same function share the work among themselves to balance the load.
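A dispatcher of this kind can be as simple as round-robin or hashing over the logic-tier machines. The sketch below is a toy illustration; the host names are invented, and a real front end would do this inside the web server or a dedicated load balancer.

```python
import hashlib
import itertools

# Hypothetical logic-tier machines (e.g. Mongrel instances) behind the front end.
mongrels = ["mongrel01:3000", "mongrel02:3000", "mongrel03:3000"]
round_robin = itertools.cycle(mongrels)

def dispatch_round_robin():
    # Spread requests evenly when all machines do the same job.
    return next(round_robin)

def dispatch_by_user(user_id):
    # Or pin a user to one machine, so that machine's cache and state stay warm.
    digest = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16)
    return mongrels[digest % len(mongrels)]

print(dispatch_round_robin())
print(dispatch_by_user("alice"))
```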
Twitter is visited not only by browsers but also by mobile phones, by desktop tools such as QQ, and by all sorts of website plug-ins that link other sites to Twitter.com [12]. So the protocol between Twitter's visitors and the Twitter site is not necessarily HTTP; other protocols are used as well.
Twitter's three-tier framework serves mainly the terminals that speak HTTP. For terminals that speak other protocols, Twitter's structure is not obviously divided into three tiers; instead, the presentation tier and the logic tier are merged into one.
In summary, a structure as simple as that in Figure 1 can complete Twitter's basic functions. One may feel puzzled: is such a well-known site really structured so simply? Yes and no. When Twitter went online in May 2006, its architecture differed little from Figure 1; the difference was the addition of a few simple caches (Cache). Even now, the outline of Figure 1 can still be seen clearly in Twitter's architecture.
Figure 1. The essential 3-tier architecture of Twitter
Reference:
[7] Commonly used Twitter tools.
[8] Building a PHP-based micro-blogging service.
[9] Apache Mina homepage.
[10] Kestrel Readme.
[11] A Working Guide to Kestrel.
[12] Alphabetical List of Twitter Services and Applications.
【3】 Cache == Cash
Cache == Cash: cache equals cash income. The phrase is a little exaggerated, but the proper use of caching is critical for a large website. How quickly a site responds to user requests is a major factor in the user experience. Many things affect speed; one important one is disk reads and writes (Disk IO).
Table 1 compares the read and write speeds of memory (RAM), hard disk (Disk), and the newer flash memory (Flash). Disk reads and writes are slower than memory by a factor of millions. Therefore, an important way to speed up a site is to cache data in memory. Of course, a copy must still be kept on disk, in case a power failure causes the data in memory to be lost.
Table 1. Storage media comparison of Disk, Flash and RAM [13]
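The "cache in RAM, keep a copy on disk" idea can be sketched as a toy write-through cache. This is only an illustration of the principle; Twitter's real setup puts memcached in front of MySQL (see [15] and [20]), and every name below is invented.

```python
import shelve

class WriteThroughCache:
    """Reads come from an in-memory dict; every write also goes to disk."""

    def __init__(self, path="cache_copy.db"):
        self.ram = {}                      # fast path, lost on power failure
        self.disk = shelve.open(path)      # durable copy on disk

    def put(self, key, value):
        self.ram[key] = value
        self.disk[key] = value             # write-through to the disk copy
        self.disk.sync()

    def get(self, key):
        if key in self.ram:                # cache hit: no disk IO at all
            return self.ram[key]
        value = self.disk.get(key)         # miss: read disk, warm the cache
        if value is not None:
            self.ram[key] = value
        return value

cache = WriteThroughCache()
cache.put("tweet:42", "hello, followers")
print(cache.get("tweet:42"))
```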
Twitter's engineers believe that, for a good user experience, a website should respond to a user request within 500ms on average of its arrival; Twitter's own target is a reaction time of 200ms-300ms [17]. Therefore Twitter's site architecture uses caching on a large scale and at multiple levels. The ways Twitter uses caches in practice, and the lessons it has drawn from that practice, are one of the highlights of Twitter's site architecture.
Figure 2. Twitter architecture with Cache
Where is caching needed? The more frequent the disk IO, the more a cache is needed.
As said before, Twitter has two core businesses: users and messages (Tweets). Around these two cores, a number of tables are stored in the database, the most important being the three below. These table settings are an onlooker's speculation and may not exactly match Twitter's, but even if the details differ, the essence should not be far off.
1. User Table: user ID, name, login name and password, status (online or not).
2. Message Table: message ID, author ID, message text, timestamp.
3. Relationship Table, recording who follows whom: user ID, the user IDs he follows (Following), the user IDs following him (Be followed).
Figure 3. Cache decreases Twitter.com load by 50% [17]
Reference:
[12] Alphabetical List of Twitter Services and Applications.
[13] How flash changes the DBMS world.
[14] Improving running components of Twitter. (EvanWeaver_ImprovingRunningComponentsAtTwitter.pdf)
[15] A high-performance, general-purpose, distributed memory object caching system.
[16] Updating Twitter without service disruptions.
[17] Fixing Twitter. (Fixing_Twitter_Improving_the_Performance_and_Scalability_of_the_World_s_Most_Popular_Micro-blogging_Site_Presentation%20Presentation.pdf)
[18] Varnish, a high-performance HTTP accelerator.
[19] How to use Varnish in Twitter.com?
[20] CacheMoney Gem, an open-source write-through caching library.
【4】 Flood control requires isolation
If the extensive use of caches is one major feature of Twitter's architecture, the other big one is its message queue (Message Queue). Why use a message queue? The explanation in [14] is, in essence, to decouple user requests from the operations they trigger, so that peak load can be spread out over time.
To understand what this means, let us look at an example. On Tuesday, January 20, 2009, U.S. President Barack Obama was inaugurated and gave his inaugural address. As the first black U.S. president, Obama's inauguration caused a strong response and sent Twitter traffic soaring, as shown in Figure 4.
Figure 4. Twitter burst during the inauguration of Barack Obama, 1/20/2009, Tuesday
At the peak moment, the Twitter site received 350 new messages per second, and the peak lasted about 5 minutes. According to statistics, the average Twitter user has 120 followers, which means that during these 5 minutes the site had to deliver 350 x 120 = 42,000 messages per second.
Facing such a flood, how does the site avoid crashing? The approach is to accept quickly but serve with a delay. By analogy, at dinner rush hour a restaurant is always full; new customers are not turned away, but asked to wait in the lounge. This is the "accept quickly, delay the service" approach described in [14]. For the Twitter system, the waiting happens in Apache's connection handling (Figure 5 shows Apache's architecture).
How many concurrent connections can Apache handle? The experiment in [22] puts it at about 4,000; see Figure 6. How can Apache's concurrent capacity be raised? One idea is not to let each connection monopolize a process: represent the connection as a data structure, store it in memory and release the process, and when the Mongrel server returns a result, load the data structure back onto a process.
In fact, the Yaws Web Server [24] does exactly this [23], so it is not surprising that Yaws can handle more than 80,000 simultaneous connections. Then why does Twitter use Apache rather than Yaws? Perhaps because Yaws is written in Erlang [25], and Twitter's engineers are not familiar with this newer language ("But you need in-house Erlang experience" [17]).
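The "park the connection, free the worker" idea can be sketched with Python's asyncio (purely illustrative; neither Apache nor Yaws works this way internally, and Twitter's stack was not Python): each waiting client is held as a lightweight future instead of a dedicated process, so tens of thousands of idle connections cost very little.

```python
import asyncio

pending = {}  # user ID -> Future: the "parked" connections waiting for news

async def wait_for_update(user_id):
    # Called when a reader's browser polls and there is nothing new yet.
    future = asyncio.get_running_loop().create_future()
    pending[user_id] = future
    return await future            # no worker is tied up while we wait here

def deliver(user_id, new_messages):
    # Called when the logic tier has produced new home-page content.
    future = pending.pop(user_id, None)
    if future and not future.done():
        future.set_result(new_messages)

async def demo():
    waiter = asyncio.create_task(wait_for_update("alice"))
    await asyncio.sleep(0.1)                    # meanwhile, something is posted
    deliver("alice", ["a brand-new message"])
    print(await waiter)

asyncio.run(demo())
```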
Figure 5. Apache web server system architecture [21]
Figure 6. Apache vs. Yaws. The horizontal axis shows the number of parallel requests, the vertical axis the throughput (KBytes/second). The red curve is Yaws, running on NFS. The blue one is Apache, running on NFS, while the green one is also Apache but on a local file system. Apache dies at about 4,000 parallel sessions, while Yaws is still functioning at over 80,000 parallel connections. [22]
Reference:
[14] Improving running components of Twitter. (2009/slides/EvanWeaver_ImprovingRunningComponentsAtTwitter.pdf)
[16] Updating Twitter without service disruptions. (twitter-without-service-disruptions/)
[17] Fixing Twitter. (Improving_the_Performance_and_Scalability_of_the_World_s_Most_Popular_Micro-blogging_Site_Presentation%20Presentation.pdf)
[21] Apache system architecture. (groene_et_al_2002-architecture_recovery_of_apache.pdf)
[22] Apache vs. Yaws.
[23] Questioning the Apache vs. Yaws performance comparison.
[24] Yaws Web Server.
[25] Erlang Programming Language.
【5】 Data flow and control flow
Letting Apache processes idle in order to accept users' visits quickly while delaying the actual service is, frankly, a stalling tactic: it keeps users from receiving the "Service Unavailable" (HTTP 503) error.
Like Da Yu taming the floods, the emphasis is on channeling. Real flood-control capacity shows in two aspects: storing the flood and discharging it. Flood storage is easy to understand: build reservoirs, either one huge reservoir or many small ones. Flood discharge involves two things: 1. drainage, 2. channels.
For the Twitter system, the huge server cluster, and especially its many MemCached-based caches, embodies the flood-storage capacity. The means of drainage is the Kestrel message queue, used to pass control commands. The channels are the data-transmission paths between machines, especially those leading to MemCached; the merit of a channel lies in whether it is unobstructed.
Twitter's design resembles Da Yu's approach not in form but in spirit. Its flood-control measures lie in the effective control of data flow: when a peak arrives, the data is promptly dispersed to many machines, avoiding an excessive concentration of pressure that would paralyze the whole system.
In June 2009, the Purewire company crawled the Twitter site to trace Twitter users and their follow relationships. It counted about 7 million users, after excluding those who neither follow anyone nor are followed by anyone, and after excluding isolated groups whose members only follow one another and have no contact with the outside world. If those isolated users and groups were included, the total number of current Twitter users would perhaps be no more than 10 million [26].
By the end of March 2009, the number of mobile users in China had reached 470 million [27]. If China Mobile's Fetion [28] and 139 Shuoke [29] wanted to develop in Twitter's direction, how much flood-control capacity would they need? Simply put, the existing Twitter system would have to be scaled up by a factor of at least 47. But in any case, stones from other hills can serve to polish jade: that is the purpose of our study of Twitter's system architecture, and in particular of its flood-control mechanisms.
If the reader's browser has already logged in to the Twitter site and established a connection, the Apache process assigned to that reader is also idling. After Mongrel has updated the reader's home page, it hands the update to the corresponding process, which actively pushes the updated home page to the reader's browser.
At first glance this flow does not seem complicated, but many of its details reward a closer look.
Reference:
[26] Twitter user statistics by Purewire, June 2009.
[27] As of March 2009, the number of mobile users in China reached 470 million.
[28] China Mobile Fetion network.
[29] China Mobile 139 Shuoke network.
【6】 Traffic peaks and cloud computing
The annual U.S. football championship is nicknamed the Super Bowl. In the United States its ratings are comparable to CCTV's Spring Festival Gala in China. On Sunday, February 3, 2008, that year's Super Bowl took place as scheduled: the New York Giants against the New England Patriots, two very strong teams whose final result was hard to predict. The game drew nearly 100 million Americans to the live television broadcast [30].
For Twitter it was predictable that traffic would rise during the game: the more intense the match, the higher the traffic. What Twitter could not predict was how much it would rise, and in particular what the peak would reach.
According to the statistics in [31], during the Super Bowl the per-minute traffic was on average 40% above the day's average flow, and at the most intense moments it exceeded 150%. Compared with the same period a week earlier, on the quiet Sunday of January 27, 2008, the average fluctuation rose from 10% to 40%, and the maximum fluctuation rose from 35% to over 150%.
Figure 8. Twitter traffic during the Super Bowl, Sunday, Feb 3, 2008 [31]. The blue line represents the percentage of updates per minute during the Super Bowl, normalized to the average number of updates per minute during the rest of the day, with spikes annotated to show what people were twittering about. The green line represents the traffic of the quiet Sunday one week earlier, January 27, 2008, shown for comparison.
Twitter's traffic thus fluctuates substantially. If the company bought enough equipment in advance to withstand these swings, especially the peaks caused by major events, most of the equipment would sit idle most of the time, which is uneconomical. But with too little equipment, the Twitter system could collapse in the face of a major event, and the consequence would be losing users.
What to do? The answer is to rent instead of buy. Twitter buys only enough equipment of its own to handle the traffic pressure when no major event is going on, and leases equipment from cloud-computing platform companies to cope with the temporary peaks brought by major events. The benefit of leasing cloud computing is real-time allocation of resources: when demand rises, more computing resources are allocated automatically.
Before 2008, Twitter had been leasing Joyent's cloud-computing platform. As the Super Bowl of February 3, 2008 approached, Joyent agreed to provide Twitter with additional computing resources free of charge during the game, to handle the flood peak [32]. Strangely, though, less than four days before the game, at 10 pm on January 30, Twitter suddenly stopped using Joyent's cloud platform and switched to another provider [33,34].
Whether the underlying reason Twitter abandoned Joyent was a business dispute or a worry that Joyent's service was unreliable remains a mystery.
Renting instead of buying is a good way to cope with peaks, but how to use the rented computing resources is also a big question. Reading [35], it is easy to see that the computing resources Twitter leases are mostly used to add Apache Web Servers, and Apache is the front-most link of the whole Twitter system.
Why does Twitter rarely allocate the leased computing resources to the other parts, such as the Mongrel Rails Servers, the MemCached servers, or the Varnish HTTP accelerators? Before answering this question, let us review the business flow described in the previous chapter.
One step of that flow is that every web browser visiting Twitter keeps a long-lived connection with the site, so that when someone posts a new message, Twitter can push it to his readers in less than 500ms. The problem is that when there is nothing to push, each of those long connections still occupies an Apache process, and the process simply idles. As a result, the vast majority of Apache processes idle most of the time, tying up a great deal of resources.
In fact, although the traffic flowing through the Apache Web Servers is only 10%-20% of Twitter's total traffic, Apache takes up 50% of the resources of Twitter's entire server cluster [16]. So, from an onlooker's point of view, Twitter is bound to oust Apache sooner or later. For now, however, when allocating computing resources, Twitter has no choice but to give priority to Apache's needs.
Saying they are compelled to do so is, from another angle, also a sign of how confident Twitter's engineers are about the other parts of their system.
As described in chapter 4, customers wait in the lounge. For the Twitter system, Apache plays the role of that lounge: as long as the lounge is big enough, users can be held for a while, or, in jargon, kept from receiving the HTTP 503 error message.
Once the users are held, the next job is to serve them efficiently. Efficient service is embodied in the remaining steps of Twitter's message-publishing flow. Why is Twitter so confident about those steps?
Reference:
[16] Updating Twitter without service disruptions.
[30] Giants and Patriots draw 97.5 million US viewers to the Super Bowl.
[31] Twitter traffic during Super Bowl 2008.
[32] Joyent provides Twitter free extra capacity during the Super Bowl 2008.
[33] Twitter stopped using Joyent's cloud at 10PM, Jan 30, 2008.
[34] The hasty divorce of Twitter and Joyent.
[35] The usage of Netcraft by Twitter.
【7】 Incompleteness as a form of progress
An incomplete way of working is, in architecture design, actually an advance.
When a user request arrives from the browser at Twitter's back-end system, the first component to greet it is the Apache Web Server, and the second is the Mongrel Rails Server. Mongrel handles both upload requests and download requests. Mongrel's logic for upload and download is very simple, but beneath that surface simplicity lies an unconventional design. This unconventional design is certainly not the result of negligence; in fact it is one of the most notable highlights of the Twitter architecture.
Figure 9. Twitter internal flows
"Upload" means a user writes a new message and uploads it to Twitter for publication; "download" means updating readers' Twitter home pages by adding the newly published messages. Twitter's way of downloading is not to have readers pull updates, but to have the Twitter server push new content to them. Looking first at upload, Mongrel's upload logic is simple, in two steps: record the new message, and hand a control instruction to the Kestrel queue. That is all; Mongrel does not itself go on to update any follower's home page.
Why has Twitter adopted this unconventional, incomplete way of working? Before answering, let us look at how Mongrel handles downloads; linking and comparing the upload and download logic helps in understanding both. Mongrel's download logic is also very simple, also in two steps: take control instructions off the Kestrel queue, and, following them, add the corresponding new messages to the home pages of the readers concerned.
Comparing Mongrel's two pieces of logic, for upload and for download, it is not hard to see what they have in common. The so-called incomplete way of working reflects two principles of Twitter's architecture design. First, a complete business process is split into several relatively independent segments, each handled by a different process on the same machine, or even by different machines. Second, the collaboration between machines is refined into control instructions and data transfers, with emphasis on separating data flow from control flow.
Splitting the business process was not Twitter's invention. In fact, the purpose of the three-tier architecture is also to split the process: the Web Server parses HTTP, the Application Server handles the business logic, and the Database stores the data. Following the same principle, the Application Server's business logic can be divided further still.
In 1996, John Ousterhout, inventor of the Tcl language and a former Berkeley professor, gave a keynote speech at the Usenix conference entitled "Why threads are a bad idea (for most purposes)" [36]. In 2003, Eric Brewer, likewise a Berkeley professor, published with his students a paper entitled "Why events are a bad idea (for high-concurrency servers)" [37]. What were these two UC Berkeley colleagues, crossing swords with each other, arguing about?
Multi-threading means, simply, that one thread is responsible for a complete business process from start to finish, like each mechanic in a garage repairing one whole car. Event-driven means that a complete business process is split into several independent pieces of work, each handled by one or more threads, like an assembly line in a car factory with several stations, each manned by one or several workers.
Evidently, Twitter's approach belongs to the event-driven camp. The advantage of event-driven design is the dynamic allocation of resources: when one piece of work becomes heavy and turns into the bottleneck of the whole process, an event-driven architecture makes it easy to direct extra resources at it and relieve the pressure. On a single machine the performance difference between the multi-threaded and event-driven designs is not very pronounced, but in a distributed system the advantages of the event-driven approach come fully into play.
Twitter makes two separations. First, Mongrel is separated from the MySQL database: Mongrel does not operate the MySQL database directly, but delegates that entirely to MemCached. Second, the upload and download logic are separated from each other, and control instructions are passed between the two through the Kestrel queues.
The debate between Professors John Ousterhout and Eric Brewer did not clearly separate the issues of data flow and control flow: a so-called event covers both the control signal and the data itself. But data is usually bulky and costly to transport, while control signals are small and cheap to transport. Separating the data flow from the control flow can therefore further improve system efficiency.
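A minimal Python sketch of this separation (illustrative only; the real components are Kestrel, MemCached and Mongrel, not a dict and a queue.Queue): the upload side stores the bulky message body in the cache and pushes only a small message ID onto the queue; the download side pops IDs and fetches the bodies from the cache.

```python
import queue

cache = {}                     # data flow: full message bodies and home pages
control = queue.Queue()        # control flow: only small message IDs travel here

def upload(msg_id, author, text):
    """Upload half: store the data, emit a tiny control signal, then stop."""
    cache[f"tweet:{msg_id}"] = {"author": author, "text": text}
    control.put(msg_id)        # deliberately incomplete: delivery happens elsewhere

def download(followers_of):
    """Download half: consume control signals and move the data."""
    while not control.empty():
        msg_id = control.get()
        tweet = cache[f"tweet:{msg_id}"]             # fetch the bulky data once
        for follower in followers_of(tweet["author"]):
            cache.setdefault(f"home:{follower}", []).insert(0, tweet)

upload(1, "alice", "hello, followers")
download(lambda author: ["bob", "carol"])
print(cache["home:bob"][0]["text"])
```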
Twitter completes the whole business process in 500ms on average, and can even reach 200-300ms, which shows that the event-driven design of this distributed system is a success.
The Kestrel message queue was developed by Twitter itself. With many open-source message-queue implementations available, why did Twitter not pick a ready-made free tool instead of going to the trouble of building its own?
Reference:
[36] Why threads are a bad idea (for most purposes), 1996.
[37] Why events are a bad idea (for high-concurrency servers), 2003.
【8】 Getting by, but not just getting by
The design of Beijing's Xizhimen overpass is often criticized. Objectively speaking, for an overpass, being able to lead traffic in every direction means the task is basically accomplished; the criticism is mainly that its routes are far too convoluted.
Of course, from the designers' point of view, they had to work under all sorts of constraints. But considering that overpasses abound all over the world, each with its own difficulties, one as confusing as Xizhimen's is truly rare. So while the difficulties facing the Xizhimen designers were an objective reality, there was surely still room for improvement.
Figure 10. The routes of the Beijing Xizhimen overpass
Large-site architecture is similar: following the traditional design saves worry and effort, but at the cost of site performance, and poor performance means a poor user experience. That a large site like Twitter has been able to take off is due not only to product features that fit the needs of the times; technical excellence is an equally necessary guarantee of success.
For example, a data-transmission channel is needed between Mongrel and MemCached, or, strictly speaking, a client library for communicating with the memcached server. Twitter's engineers first implemented the channel in Ruby, then implemented a faster one in C, and after that kept refining the details to keep improving transmission efficiency. This series of improvements raised Twitter's speed from 3.23 requests per second without caching to 139.03 requests per second today; see Figure 11. The data channel is now known as libmemcached, an open-source project [38].
Figure 11. Evolving from a Ruby memcached client to a C client with optimised hashing. These changes increased Twitter's performance from 3.23 requests per second without caching to 139.03 requests per second nowadays [14].
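Figure 11 mentions "optimised hashing", that is, how the client decides which memcached server holds a given key. The sketch below shows the general idea of hashing keys onto a ring of servers; it is a toy, not libmemcached's actual algorithm, and the host names are invented.

```python
import hashlib
from bisect import bisect

servers = ["cache01:11211", "cache02:11211", "cache03:11211"]  # hypothetical hosts

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

# Place each server at many points on a ring; a key belongs to the first
# server point found clockwise from the key's own hash.
ring = sorted((_hash(f"{server}-{i}"), server)
              for server in servers for i in range(100))
points = [point for point, _ in ring]

def server_for(key):
    index = bisect(points, _hash(key)) % len(ring)
    return ring[index][1]

print(server_for("home:alice"), server_for("tweet:42"))
```

Compared with simple modulo hashing, a ring like this has the advantage that adding or removing one cache server remaps only a small fraction of the keys.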
Another example: the Twitter system uses message queues to pass control signals, and these signals are taken off the queue again almost immediately, so their life cycle is very short. A short life cycle means that the efficiency of the message queue's garbage collection (Garbage Collection) seriously affects the efficiency of the whole system, so improving the garbage-collection mechanism became an unavoidable problem. The message queue Twitter originally used was not Kestrel but a simple queue tool written in Ruby. Staying with the Ruby language, however, left little room for performance optimization. Ruby's strength is that it integrates many features, which greatly reduces the programming workload during development; but the strength is also a weakness, because so many integrated features pull on one another and get in the way of optimization.
After several attempts, Twitter's engineers finally gave up on the Ruby language and implemented a queue of their own in Scala, called Kestrel [39].
The main motivation for the change of language is that Scala runs on the JVM, which offers rich means of tuning garbage-collection performance. Figure 12 shows the garbage-collection latency after the switch to Kestrel: in normal times the delay is only 2ms on average, 4ms at most; in peak periods it averages 5ms, with a maximum of 35ms.
Figure 12. The latency of Twitter Kestrel garbage collection [14].
It seems to be the general trend that Ruby-on-Rails is gradually fading out of Twitter, and the final, perhaps climactic, step may be the replacement of Mongrel. Twitter's Evan Weaver has said as much. The Apache + Mongrel combination is one way of implementing Ruby-on-Rails; it can handle 139 requests per second (see Figure 11). In other words, the advantage of the Apache + Mongrel combination is that it lightens the engineers' programming burden, but at the cost of roughly a four-fold loss in system performance, that is, users wait on average about four times as long.
Staying alive is usually not hard; staying brilliant always is. Getting by, but not just getting by: that is a kind of spirit.
Reference:
[14] Improving running components of Twitter.
[16] Updating Twitter without service disruptions.
[38] Open source project, libmemcached, by Twitter.
[39] Open source project, Kestrel Messaging Queue, by Twitter.
【9】 Conclusion
This series has discussed Twitter's architecture design, in particular its distinctive use of caching and its organization of data flow and control flow, and has compared them with the three flood-control methods of storage, drainage and channels to make them easier to understand. It has also looked at the results of actual operation, to check whether such a design can withstand the pressures met in practice.
Dissecting the architecture of a real website has its difficulties.