The main challenge for the backend was keeping all the clients in sync with the state of the board.

Our solution was to initialize the client state by having it listen for real-time tile placements immediately and then make a request for the full board. The full board in the response could be a few seconds stale as long as we also had real-time placements starting from before it was generated. When the client received the full board, it replayed all the real-time placements it had received while waiting. All subsequent tile placements could be drawn to the board immediately as they were received.

For this scheme to work we needed the request for the full state of the board to be as fast as possible. Our initial approach was to store the full board in a single row in Cassandra, and each request for the full board would read that entire row. Each column in the row was keyed by its (x, y) coordinate. Because the board contained 1 million tiles, this meant we had to read a row with 1 million columns. On our production cluster this read took up to 30 seconds, which was unacceptably slow and could have put excessive strain on Cassandra.

Our next approach was to store the full board in redis. We used a bitfield of 1 million 4-bit integers. Each 4-bit integer encoded a 4-bit color, and the x, y coordinates were determined by the offset (offset = x + 1000y) within the bitfield. We could read the entire board state by reading the entire bitfield, and we could update individual tiles by updating the value of the bitfield at a specific offset (no need for locking or read/modify/write). We still needed to store the full details in Cassandra so that users could inspect individual tiles to see who placed them and when. We also planned on using Cassandra to restore the board in case of a redis failure.
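As a rough illustration of the bitfield layout, here is a minimal, dependency-free Python sketch. The real system stores this in a redis bitfield (two 4-bit values per byte, high nibble first, matching redis's `BITFIELD u4` semantics); the `bytearray`, the `WIDTH`/`HEIGHT` constants, and the helper names below are assumptions for the example, not the production code.

```python
WIDTH, HEIGHT = 1000, 1000  # the board is 1000 x 1000 tiles

# 4 bits per tile -> 500,000 bytes for the entire board.
board = bytearray(WIDTH * HEIGHT // 2)

def set_tile(board: bytearray, x: int, y: int, color: int) -> None:
    """Write a 4-bit color at offset = x + 1000y, touching only one byte."""
    offset = x + WIDTH * y
    byte, half = divmod(offset, 2)  # two 4-bit values packed per byte
    if half == 0:  # even offsets live in the high nibble
        board[byte] = (color << 4) | (board[byte] & 0x0F)
    else:          # odd offsets live in the low nibble
        board[byte] = (board[byte] & 0xF0) | (color & 0x0F)

def get_tile(board: bytearray, x: int, y: int) -> int:
    """Read back the 4-bit color stored for tile (x, y)."""
    offset = x + WIDTH * y
    byte, half = divmod(offset, 2)
    return board[byte] >> 4 if half == 0 else board[byte] & 0x0F
```

Reading the whole board is then just reading the 500 KB buffer in one shot, which is why this layout makes the full-board request so much cheaper than the 1-million-column Cassandra read.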