Welcome!



Newsfeeds
Planet MySQL
Planet MySQL - https://planet.mysql.com

  • Vue Laravel CRUD Example Tutorial From Scratch
    Vue Laravel CRUD Example Tutorial From Scratch is today’s leading topic.  Laravel is the PHP’s fastest growing Framework with scalability and flexibility. VueJS is the fastest growing Front end Library in the Javascript community. Laravel is providing VueJS support out of the box.  For this demo, we will not create separate projects for the frontend because Laravel provides the best support for Vue.js. So, we will build the Vue components inside the Laravel’s assets folder. We will create a Single Page Application(SPA) using the Laravel and Vue.js. In this SPA, you can create a post, edit the post, read the post and delete the post. We will create Laravel API Backend. Vue Laravel CRUD Example Tutorial Now, first, we need to install the Laravel application. So let us install new Laravel by the following command. #1: Install Laravel 5.7 laravel new vuelaravelcrud   Now, go inside the project folder and install the frontend dependencies using the following command. npm install Also, open the project in your editor. I am using Visual Studio Code. code . Setup the database configuration inside the .env file. DB_CONNECTION=mysql DB_HOST=127.0.0.1 DB_PORT=3306 DB_DATABASE=vuecrud DB_USERNAME=root DB_PASSWORD=root Save the file, and now your backend can connect the MySQL database.  Now, start the compiling assets using the following command. npm run dev   We can also run the following command to compile the assets as you write the new code or modify the existing code. npm run watch #2: Install Vue dependency and edit configurations. Type the following command to install the vue-router and vue-axios dependencies. The vue-router is used for routing our Vue.js application to use the different components and the vue-axios for sending the network request to the server. npm install vue-router vue-axios --save Now, we need to change the app.js file inside resources  >> js folder. Modify with the following code inside the app.js file. // App.js require('./bootstrap'); window.Vue = require('vue'); import VueRouter from 'vue-router'; Vue.use(VueRouter); import VueAxios from 'vue-axios'; import axios from 'axios'; Vue.use(VueAxios, axios); Vue.component('example-component', require('./components/ExampleComponent.vue')); const router = new VueRouter({ mode: 'history'}); const app = new Vue(Vue.util.extend({ router })).$mount('#app'); Here, we have configured the vue-router and vue-axios library. Now, inside the resources >> views folder, create one file called the post.blade.php file. Add the following code inside the post.blade.php file. <!doctype html> <html lang="{{ str_replace('_', '-', app()->getLocale()) }}"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Laravel</title> <link href="https://fonts.googleapis.com/css?family=Nunito:200,600" rel="stylesheet" type="text/css"> <link href="/{{ mix('css/app.css') }}" type="text/css" rel="stylesheet" /> <meta name="csrf-token" value="{{ csrf_token() }}" /> </head> <body> <div id="app"> <example-component></example-component> </div> <script src="/{{ mix('js/app.js') }}" type="text/javascript"></script> </body> </html> Now, change the route inside the routes >> web.php file. We are building the Single Page Application using Laravel and Vue. So we need to define the following route, where we can push any route and then it will give the correct route without any error. 
Otherwise, it will give the 404 error because, in Laravel, you have not defined any route, it is the Vue where you have defined the various component route. Due to the following code, now you can be able to use Laravel route proxy as a Vue route, and you can display the Vue components depending on the current URL. <?php Route::get('/{any}', function () { return view('post'); })->where('any', '.*'); Save the file and go to the browser and see the result. You can see that we have successfully integrated the Vue component in our Laravel application. #3: Create Vue Components Inside the resources >>  js folder, create a folder called components and inside that folder, create four following vue components. HomeComponent.vue CreateComponent.vue EditComponent.vue IndexComponent.vue // HomeComponent.vue <template> <div class="row justify-content-center"> <div class="col-md-8"> <div class="card card-default"> <div class="card-header">Home Component</div> <div class="card-body"> I'm the Home Component component. </div> </div> </div> </div> </template> <script> export default { mounted() { console.log('Component mounted.') } } </script> // CreateComponent.vue <template> <div class="row justify-content-center"> <div class="col-md-8"> <div class="card card-default"> <div class="card-header">Create Component</div> <div class="card-body"> I'm the Create Component component. </div> </div> </div> </div> </template> <script> export default { mounted() { console.log('Component mounted.') } } </script> // EditComponent.vue <template> <div class="row justify-content-center"> <div class="col-md-8"> <div class="card card-default"> <div class="card-header">Edit Component</div> <div class="card-body"> I'm an Edit component. </div> </div> </div> </div> </template> <script> export default { mounted() { console.log('Example Component mounted.') } } </script> // IndexComponent.vue <template> <div class="row justify-content-center"> <div class="col-md-8"> <div class="card card-default"> <div class="card-header">Index Component</div> <div class="card-body"> I'm an Index component. </div> </div> </div> </div> </template> <script> export default { mounted() { console.log('Index Component mounted.') } } </script> #4: Configure the vue-router Inside the app.js file, write the following code. // app.js require('./bootstrap'); window.Vue = require('vue'); import VueRouter from 'vue-router'; Vue.use(VueRouter); import VueAxios from 'vue-axios'; import axios from 'axios'; import App from './App.vue'; Vue.use(VueAxios, axios); import HomeComponent from './components/HomeComponent.vue'; import CreateComponent from './components/CreateComponent.vue'; import IndexComponent from './components/IndexComponent.vue'; import EditComponent from './components/EditComponent.vue'; const routes = [ { name: 'home', path: '/', component: HomeComponent }, { name: 'create', path: '/create', component: CreateComponent }, { name: 'posts', path: '/posts', component: IndexComponent }, { name: 'edit', path: '/edit/:id', component: EditComponent } ]; const router = new VueRouter({ mode: 'history', routes: routes}); const app = new Vue(Vue.util.extend({ router }, App)).$mount('#app'); Here, we have imported the four components and also defined the routes for our application. Then we have created a router object and passed it to our Vue application. Now, create one more vue component inside the resources >> js folder called App.vue and add the following code in it. 
// App.vue <template> <div class="container"> <div> <transition name="fade"> <router-view></router-view> </transition> </div> </div> </template> <style> .fade-enter-active, .fade-leave-active { transition: opacity .5s } .fade-enter, .fade-leave-active { opacity: 0 } </style> <script> export default{ } </script> Here, we have defined our router- view. It is the directive that will render the component according to current URL. So, if our URL is /create, then it will present the CreateComponent on the webpage. Save the file, and if your npm run watch is not running, then you might need to compile again and then go to the browser and go to these URLs for testing and see if the project is working or not. http://vuelaravelcrud.test/create http://vuelaravelcrud.test/posts http://vuelaravelcrud.test/edit/21 If you are getting the exact component, then you are perfectly fine and if not then check out for the error on the terminal as well as the console panel in the browser. #5: Create a Navigation bar Write the following code inside the App.vue file. I have added the navbar. // App.vue <template> <div class="container"> <nav class="navbar navbar-expand-sm bg-dark navbar-dark"> <ul class="navbar-nav"> <li class="nav-item"> <router-link to="/" class="nav-link">Home</router-link> </li> <li class="nav-item"> <router-link to="/create" class="nav-link">Create Post</router-link> </li> <li class="nav-item"> <router-link to="/posts" class="nav-link">Posts</router-link> </li> </ul> </nav><br /> <transition name="fade"> <router-view></router-view> </transition> </div> </template> <style> .fade-enter-active, .fade-leave-active { transition: opacity .5s } .fade-enter, .fade-leave-active { opacity: 0 } </style> <script> export default{ } </script> Save the file and see the output on the browser.   #6: Create a Form Write the following code inside the CreateComponent.vue file. We will add the Bootstrap Form to create a post. // CreatePost.vue <template> <div> <h1>Create A Post</h1> <form @submit.prevent="addPost"> <div class="row"> <div class="col-md-6"> <div class="form-group"> <label>Post Title:</label> <input type="text" class="form-control" v-model="post.title"> </div> </div> </div> <div class="row"> <div class="col-md-6"> <div class="form-group"> <label>Post Body:</label> <textarea class="form-control" v-model="post.body" rows="5"></textarea> </div> </div> </div><br /> <div class="form-group"> <button class="btn btn-primary">Create</button> </div> </form> </div> </template> <script> export default { data(){ return { post:{} } }, methods: { addPost(){ console.log(this.post); } } } </script> So, we have taken the two fields. Post Title and Post Body. We have made one method called addPost().  So, when a user submits the form, we will get the input inside the addPost() method. From then, we will send a POST request to the Laravel server and to save the data into the database. I am skipping the validation of each field because this article is getting long and long. So we will do it in another post. Save the file and go to this URL: http://vuelaravelcrud.test/create or /create. You can see the form like below.   #7: Create a Laravel Backend The primary purpose of the Laravel Framework in this example is to build a backend API. So, first, we will create a schema for post table. Also, we need a Post model. So let’s create both using the following command. php artisan make:model Post -m Now, write the following schema inside [timestamp]create_posts_table.php file. 
public function up() { Schema::create('posts', function (Blueprint $table) { $table->increments('id'); $table->string('title'); $table->text('body'); $table->timestamps(); }); } Migrate the database using the following command. php artisan migrate Next, inside the Post.php model, write the following code to prevent the mass assignment exception. <?php // Post.php namespace App; use Illuminate\Database\Eloquent\Model; class Post extends Model { protected $fillable = ['title', 'body']; } Also, create a controller using the following command. php artisan make:controller PostController We are using Laravel Resource Collection for API development. So let us create a Laravel Resource Collection using the following command. php artisan make:resource PostCollection When building an API, you may need a transformation layer that sits between your Eloquent models and the JSON responses that are returned to your application’s users. Laravel’s resource classes allow you to expressively and quickly transform your models and model collections into JSON. The PostCollection resource is generated inside app >> Http >>Resources >> PostCollection.php file. <?php // PostCollection.php namespace App\Http\Resources; use Illuminate\Http\Resources\Json\ResourceCollection; class PostCollection extends ResourceCollection { /** * Transform the resource collection into an array. * * @param \Illuminate\Http\Request $request * @return array */ public function toArray($request) { return parent::toArray($request); } } #8: Define the CRUD operations First, we define the function that stores the data inside the MySQL database. <?php // PostController.php namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Http\Resources\PostCollection; use App\Post; class PostController extends Controller { public function store(Request $request) { $post = new Post([ 'title' => $request->get('title'), 'body' => $request->get('body') ]); $post->save(); return response()->json('success'); } } Now, write the edit, update, index, and delete functions as well. <?php // PostController.php namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Http\Resources\PostCollection; use App\Post; class PostController extends Controller { public function store(Request $request) { $post = new Post([ 'title' => $request->get('title'), 'body' => $request->get('body') ]); $post->save(); return response()->json('successfully added'); } public function index() { return new PostCollection(Post::all()); } public function edit($id) { $post = Post::find($id); return response()->json($post); } public function update($id, Request $request) { $post = Post::find($id); $post->update($request->all()); return response()->json('successfully updated'); } public function delete($id) { $post = Post::find($id); $post->delete(); return response()->json('successfully deleted'); } } #9: Define the api routes Now, we need to define the API routes inside the routes >> api.php file. <?php // api.php use Illuminate\Http\Request; Route::middleware('auth:api')->get('/user', function (Request $request) { return $request->user(); }); Route::post('/post/create', 'PostController@store'); Route::get('/post/edit/{id}', 'PostController@edit'); Route::post('/post/update/{id}', 'PostController@update'); Route::delete('/post/delete/{id}', 'PostController@delete'); Route::get('/posts', 'PostController@index'); #10: Use Axios to send a network request Now, we have created the backend. Next step is to send a POST request to the Laravel development server. 
Write the following code inside the CreateComponent.vue file’s addPost() function. // CreateComponent.vue addPost(){ let uri = 'http://vuelaravelcrud.test/api/post/create'; this.axios.post(uri, this.post).then((response) => { this.$router.push({name: 'posts'}); }); } So, it will post a request to the server’s api point with the data and the server saves the data. Now, display the posts. So let us edit the IndexComponent.vue file. // IndexComponent.vue <template> <div> <h1>Posts</h1> <div class="row"> <div class="col-md-10"></div> <div class="col-md-2"> <router-link :to="{ name: 'create' }" class="btn btn-primary">Create Post</router-link> </div> </div><br /> <table class="table table-hover"> <thead> <tr> <th>ID</th> <th>Item Name</th> <th>Item Price</th> <th>Actions</th> </tr> </thead> <tbody> <tr v-for="post in posts" :key="post.id"> <td>{{ post.id }}</td> <td>{{ post.title }}</td> <td>{{ post.body }}</td> <td><router-link :to="{name: 'edit', params: { id: post.id }}" class="btn btn-primary">Edit</router-link></td> <td><button class="btn btn-danger">Delete</button></td> </tr> </tbody> </table> </div> </template> <script> export default { data() { return { posts: [] } }, created() { let uri = 'http://vuelaravelcrud.test/api/posts'; this.axios.get(uri).then(response => { this.posts = response.data.data; }); } } </script> Here, when the component is created, we send the network request to fetch the data. Now, here, we need to write the data because axios is by default wrap our backend data posts with data and we are using the Resource Collection. So it will also wrap the data in the data key. So to get the actual data, we need to write like response.data.data. If we have not used axios then we might write the response.data, but axios also has backend data template like response.data, and that is why we need to write the response.data.data. #11: Send edit and update request Now, when our edit component loads, we need to fetch the data from the server to display the existing data. Then, after changing the data in the textbox and textarea, we hit the update button and we call the updatePost() function to send a post request to the server to update the data. // EditComponent.vue <template> <div> <h1>Edit Post</h1> <form @submit.prevent="updatePost"> <div class="row"> <div class="col-md-6"> <div class="form-group"> <label>Post Title:</label> <input type="text" class="form-control" v-model="post.title"> </div> </div> </div> <div class="row"> <div class="col-md-6"> <div class="form-group"> <label>Post Body:</label> <textarea class="form-control" v-model="post.body" rows="5"></textarea> </div> </div> </div><br /> <div class="form-group"> <button class="btn btn-primary">Update</button> </div> </form> </div> </template> <script> export default { data() { return { post: {} } }, created() { let uri = `http://vuelaravelcrud.test/api/post/edit/${this.$route.params.id}`; this.axios.get(uri).then((response) => { this.post = response.data; }); }, methods: { updatePost() { let uri = `http://vuelaravelcrud.test/api/post/update/${this.$route.params.id}`; this.axios.post(uri, this.post).then((response) => { this.$router.push({name: 'posts'}); }); } } } </script> Now, you can try to edit the data and update the form, and you can see that we can now update the data. #12: Delete the data Now, the only remaining thing is to delete or remove the data from the database. So, let us write the final code inside the IndexComponent.vue file. 
// IndexComponent.vue
<template>
  <div>
    <h1>Posts</h1>
    <div class="row">
      <div class="col-md-10"></div>
      <div class="col-md-2">
        <router-link :to="{ name: 'create' }" class="btn btn-primary">Create Post</router-link>
      </div>
    </div><br />
    <table class="table table-hover">
      <thead>
        <tr>
          <th>ID</th>
          <th>Title</th>
          <th>Body</th>
          <th>Actions</th>
        </tr>
      </thead>
      <tbody>
        <tr v-for="post in posts" :key="post.id">
          <td>{{ post.id }}</td>
          <td>{{ post.title }}</td>
          <td>{{ post.body }}</td>
          <td><router-link :to="{name: 'edit', params: { id: post.id }}" class="btn btn-primary">Edit</router-link></td>
          <td><button class="btn btn-danger" @click.prevent="deletePost(post.id)">Delete</button></td>
        </tr>
      </tbody>
    </table>
  </div>
</template>
<script>
export default {
  data() {
    return {
      posts: []
    }
  },
  created() {
    let uri = 'http://vuelaravelcrud.test/api/posts';
    this.axios.get(uri).then(response => {
      this.posts = response.data.data;
    });
  },
  methods: {
    deletePost(id) {
      let uri = `http://vuelaravelcrud.test/api/post/delete/${id}`;
      this.axios.delete(uri).then(response => {
        // Look up the deleted post's index by its id before removing it from the local array.
        const index = this.posts.findIndex(post => post.id === id);
        if (index !== -1) {
          this.posts.splice(index, 1);
        }
      });
    }
  }
}
</script>
Here, we use a click event in Vue to grab the id of the current post, send that id to the API to remove the record from the database, and then remove the post from the client-side array with splice (finding the element's index by its id first). Finally, the Vue Laravel CRUD Example Tutorial From Scratch is over. Thanks for reading. You can use the following Github Code for reference. Github Code The post Vue Laravel CRUD Example Tutorial From Scratch appeared first on AppDividend.
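As a MySQL-flavored footnote to step #7 above: the posts migration should produce a table roughly equivalent to the following DDL. This is a sketch assuming Laravel 5.7 defaults on MySQL, shown only so you can see what ends up in the database; your column sizes and charset may differ.

-- Approximate table created by the posts migration (for reference only).
CREATE TABLE posts (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- $table->increments('id')
  title VARCHAR(255) NOT NULL,                          -- $table->string('title')
  body TEXT NOT NULL,                                   -- $table->text('body')
  created_at TIMESTAMP NULL,                            -- $table->timestamps()
  updated_at TIMESTAMP NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;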

  • Visualizing the MySQL Bug Tide (2018 edition)
    I've updated the bug tide graph I made in 2016 with today's data. The source code and details are here.

  • Migrating to Amazon Aurora: Optimize for Binary Log Replication
    In this Checklist for Success series, we discuss reducing unknowns when hosting in the cloud with, and migrating to, Amazon Aurora. These tips may also apply to other database-as-a-service (DBaaS) offerings. In our previous article, we discussed the importance of continuous query performance analysis, especially in Amazon Aurora where there is less diagnostic visibility compared to running on EC2 or on-premises. Aside from uptime, though, we need a lot more from our data, and we definitely cannot isolate it in Aurora. Next on our checklist: at one point or another, you will need to use asynchronous replication. Amazon Aurora has an excellent reputation for absorbing intense amounts of writes, and in many such cases any asynchronous replica can have trouble catching up.

Different Clusters for Different Workloads

Critical workloads and datasets cannot rely on a single copy of their data. With Amazon Aurora, predictable performance means avoiding mixing workloads within your production cluster. While read-heavy workloads might fit easily into read replicas, reporting or analytics workloads might not be a good fit to execute on your main cluster, where read-what-you-write profiles are normally found. You can delegate these either to a separate asynchronous replica running as its own Amazon Aurora cluster, or to one that runs on an EC2 instance (as an example). This is also true if, say, your analytics or reporting workload generates a significant amount of disk IOPS: Amazon Aurora bills IO operations per million, so you might save some money running disk-heavy analytics on a replica on an i3 instance with local NVMe storage, for example. Similarly, running an async replica on an EC2 instance or on-premises lets you take your own independent backups, or simply adds an extra level of redundancy.

Multi-Threaded Replication

It is a known fact that MySQL asynchronous replication performance is subject to some limitations. The biggest one is that, by default, it is single-threaded. MySQL 5.6 introduced multi-threaded replication at one database per thread. This did not help the majority of use cases, as workloads vary per database and therefore create an imbalance. MySQL 5.7 (Aurora 2.0) brought additional improvements, such as an alternative algorithm for parallelizing thread execution that depends on how the acting primary writes binary log entries. That said, certain multi-threaded replication variables (such as transaction_write_set_extraction) require the binlog format to be set to ROW. This might sound counter-intuitive, because the ROW binlog format can actually increase the replication workload. While ROW format reduces the ambiguity from potentially non-deterministic statements that could cause a replica to drift and become inconsistent, critical operations (schema changes) and optimizations (MTS) require that you use the ROW binlog format. It should be apparent by now that the only reasonable path forward to improving asynchronous replication is the multi-threaded approach, and with it comes the need for ROW binlog format. Any design effort should take this into account if async replication lag is considered a risk. For the basics, configuration options like slave_compressed_protocol and binlog_row_image can reduce network churn. In the deep end, reducing dataset hotspots, ensuring tables have PRIMARY KEYs, and embracing multi-threaded optimization can also go a long way.
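To make the knobs above concrete, here is a rough sketch of how these settings might look on a self-managed MySQL 5.7 replica. On Aurora they are controlled through DB cluster and instance parameter groups rather than SET statements, and availability varies; treat the values as illustrative, not as recommendations.

-- On the primary: row-based logging with a compact row image.
SET GLOBAL binlog_format = 'ROW';
SET GLOBAL binlog_row_image = 'MINIMAL';
-- Writeset extraction, as noted above, is only valid with ROW binlog format.
SET GLOBAL transaction_write_set_extraction = 'XXHASH64';

-- On the replica: enable multi-threaded replication based on the logical clock.
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 8;
-- Compress the replication stream to reduce network churn.
SET GLOBAL slave_compressed_protocol = ON;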
Additional Benefits

While running certain read-heavy queries on an async replica, or ensuring you have access to the physical datafiles, are common use cases, being able to switch to another location (region) or simply to another cluster can also be necessary in some situations. Adding or dropping a column or index on a large table can be achieved with either pt-online-schema-change or gh-ost, but where time is a constraint, applying schema changes on an asynchronous cluster and then switching over to it sometimes pays for itself. Configuration changes or upgrades that require a cluster restart can take seconds or even minutes. Wouldn't it be nice if you already had a cluster with all these changes in place, ready to take over at a moment's notice? Or to fail back to the original if there was an unforeseen issue? Stay "tuned" for part three. Meanwhile, we'd like to hear your success stories with Amazon Aurora in the comments below! Don't forget to come by and see us at AWS re:Invent, November 26-30, 2018 in booth 1605! Percona CEO Peter Zaitsev will deliver a keynote on MySQL High Availability & Disaster Recovery on Tuesday, November 27 at 1:45 PM – 2:45 PM in the Bellagio Hotel, Level 1, Gauguin 2.

  • MySQL: when will the password of my users expire ?
    As you may already know, in MySQL it is possible to set an expiration period for passwords. It's not always obvious to see when a password will expire, and most of the time, if you don't pay much attention, you will get something like this:

ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

Let me share a small query that checks how long your password is still valid, in case it was created with an expiration delay or if the global default_password_lifetime is set on the server (from MySQL 5.7.4 to 5.7.10 it was set to 360 by default)!

select user, password_last_changed,
       concat(
         cast(IFNULL(password_lifetime, @@default_password_lifetime) as signed)
         + cast(datediff(password_last_changed, now()) as signed), " days") expires_in
from mysql.user
where cast(IFNULL(password_lifetime, @@default_password_lifetime) as signed)
      + cast(datediff(password_last_changed, now()) as signed) >= 0
  and user not like 'mysql.%';

Let's see the result when default_password_lifetime is set to 0 (the current default):

+-------+-----------------------+------------+
| user  | password_last_changed | expires_in |
+-------+-----------------------+------------+
| test  | 2018-11-16 13:12:11   | 2 days     |
| test2 | 2018-11-10 13:16:44   | 4 days     |
+-------+-----------------------+------------+

Now let's change that global variable:

mysql> set global default_password_lifetime=20;

+-------+-----------------------+------------+
| user  | password_last_changed | expires_in |
+-------+-----------------------+------------+
| fred  | 2018-11-12 21:59:56   | 16 days    |
| test  | 2018-11-16 13:12:11   | 2 days     |
| test2 | 2018-11-10 13:16:44   | 4 days     |
+-------+-----------------------+------------+

That was the easy and useful query of the day.
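If you would rather control the policy per account instead of (or in addition to) the global variable, MySQL 5.7 and later also let you attach an expiration interval to the user itself. A small sketch, with example account names:

-- Expire this account's password every 90 days.
ALTER USER 'test'@'localhost' PASSWORD EXPIRE INTERVAL 90 DAY;

-- Or exempt an account from any expiration policy.
ALTER USER 'app'@'%' PASSWORD EXPIRE NEVER;

-- Or fall back to whatever default_password_lifetime says.
ALTER USER 'test2'@'localhost' PASSWORD EXPIRE DEFAULT;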

  • How Not to do MySQL High Availability: Geographic Node Distribution with Galera-Based Replication Misuse
    Let’s talk about MySQL high availability (HA) and synchronous replication once more. It’s part of a longer series on some high availability reference architecture solutions over geographically distributed areas. Part 1: Reference Architecture(s) for High Availability Solutions in Geographic Distributed Scenarios: Why Should I Care? Part 2: MySQL High Availability On-Premises: A Geographically Distributed Scenario The Problem A question I often get from customers is: How do I achieve high availability in case if I need to spread my data in different, distant locations? Can I use Percona XtraDB Cluster? Percona XtraDB Cluster (PXC), mariadb-cluster or MySQL-Galera are very stable and well-known solutions to improve MySQL high availability using an approach based on multi-master data-centric synchronous data replication model. Which means that each data-node composing the cluster MUST see the same data, at a given moment in time. Information/transactions must be stored and visible synchronously on all the nodes at a given time. This is defined as a tightly coupled database cluster. This level of consistency comes with a price, which is that nodes must physically reside close to each other and cannot be geographically diverse. This is by design (in all synchronous replication mechanisms). This also has to be clarified over and over throughout the years. Despite that we still see installations that span across geographic locations, including AWS Regions. We still see some solutions breaking the golden rule of proximity, and trying to break the rules of physics as well. The problem/mistake is not different for solutions based on-premises or in the cloud (for whatever cloud provider). Recently I had to design a couple of customer solutions based on remote geographic locations. In both cases, the customer was misled by an incorrect understanding of how the synchronous solution works, and from a lack of understanding of the network layer. I decided I need to cover this topic again, as I have done previously in Galera geographic replication and Effective way to check network connection in a geographically distributed environment  What Happen When I Put Things on the Network? Well, let’s start with the basics. While light travels at 300 million meters per second, the propagation of the electric fields or electric signaling is slower than that. The real speed depends by the medium used to transmit it. But it can be said that the real speed normally spans from 0% to 99% of light-speed (depending on the transmission medium). This means that in optimal conditions the signal travels at approximately 299.72Km per millisecond, in good/mid condition about half that at 149.86Km per millisecond, and in bad conditions it could be 3Km per millisecond or less. To help you understand, the distance between Rome (Italy) and Mountain View (California) is about 10,062Km. At light-speed it will take 33.54ms. In good conditions (90% of light-speed) the signal will take 37.26ms to reach Mountain View, and in less optimal conditions it can easily double to 74.53 ms. Keep in mind this is the electric field propagation speed: optimal conditions with no interruption, re-routing and so on. Reality will bring all the kind of interruptions, repeaters and routing. All the physics above works as a baseline. On top of this, each human construct adds functionalities, flexibility and (unfortunately) overhead – leading to longer times and slower speeds. 
The final speed will be different than the simple propagation of the electric fields. It will include the transmission time of complex signaling using ICMP protocol, or even higher delays with the use of a very complex protocol like TCP/IP, which includes handshaking, package rerouting, re-sending and so on. On top of that, when sending things over the internet, we need to realize that it is very improbable we will be the only user sending data over that physical channel. As such, whatever we have “on the road” will need to face bandwidth limitation, traffic congestion and so on. I had described the difference between protocols (ICMP – TCP/IP) here, clarifying how the TCP/IP scenario is very different from using different protocols like ICMP, or the theoretical approach. What it all means is that we cannot trust the theoretical performance. We must move to a more empirical approach. But we must understand the right empirical approach or we will be misled. An Example I recently worked on a case where a customer had two data centers (DC) at a distance of approximately 400Km, connected with “fiber channel”. Server1 and Server2 were hosted in the same DC, while Server3 was in the secondary DC. Their ping, with default dimension, to Server3 was ~3ms. Not bad at all, right? We decided to perform some serious tests, running multiple sets of tests with netperf for many days collecting data. We also used the data to perform additional fine tuning on the TCP/IP layer AND at the network provider. The results produced a common (for me) scenario (not so common for them):   The red line is the first set of tests BEFORE we optimized. The yellow line is the results after we optimized. The above graph reports the number of transactions/sec (AVG) we could run against the different dimension of the dataset and changing the destination server. The full roundtrip was calculated. It is interesting to note that while the absolute numbers were better in the second (yellow) tests, this was true only for a limited dataset dimension. The larger the dataset, the higher the impact. This makes sense if you understand how the TCP/IP stack works (the article I mentioned above explains it). But what surprised them were the numbers. Keeping aside the extreme cases and focusing instead on the intermediate case, we saw that shifting from a 48k dataset dimension to 512K hugely dropped the performance. The drop for executed transactions was from 2299 to 219 on Server2 (same dc) and from 1472 to 167 Server3 (different DC). Also, note that Server3 only managed ~35% fewer transactions comparing to Server2 from the start given the latency. Latency moved from a more than decent 2.61ms to 27.39ms for Server2 and 4.27ms to 37.25ms for Server3. 37ms latency is not very high. If that had been the top limit, it would have worked. But it was not. In the presence of the optimized channel, with fiber and so on, when the tests were hitting heavy traffic, the congestion was such to compromise the data transmitted. It hit a latency >200ms for Server3. Note those were spikes, but if you are in the presence of a tightly coupled database cluster, those events can become failures in applying the data and can create a lot of instability. Let me recap a second the situation for Server3: We had two datacenters. The connection between the two was with fiber Distance Km ~400, but now we MUST consider the distance to go and come back. This because in case of real communication, we have not only the send, but also the receive packages. 
Theoretical time at light-speed =2.66ms (2 ways) Ping = 3.10ms (signal traveling at ~80% of the light speed) as if the signal had traveled ~930Km (full roundtrip 800 Km) TCP/IP best at 48K = 4.27ms (~62% light speed) as if the signal had traveled ~1,281km TCP/IP best at 512K =37.25ms (~2.6% light speed) as if the signal had traveled ~11,175km Given the above, we have from ~20%-~40% to ~97% loss from the theoretical transmission rate. Keep in mind that when moving from a simple signal to a more heavy and concurrent transmission, we also have to deal with the bandwidth limitation. This adds additional cost. All in only 400Km distance. This is not all. Within the 400km we were also dealing with data congestions, and in some cases the tests failed to provide the level of accuracy we required due to transmission failures and too many packages retry. For comparison, consider Server2 which is in the same DC of Server1. Let see: Ping = 0.027ms that is as if the signal had traveled ~11km light-speed TCP/IP best at 48K = 2.61ms as if traveled for ~783km TCP/IP best at 512K =27.39ms as if traveled for ~8,217km We had performance loss, but the congestion issue and accuracy failures did not happen. You might say, “But this is just a single case, Marco, you cannot generalize from this behavior!” You would be right IF that were true (but is not). The fact is, I have done this level of checks many times and in many different environments. On-premises or using the cloud. Actually, in the cloud (AWS), I had even more instability. The behavior stays the same. Please test it yourself (it is not difficult to use netperf). Just do the right tests with RTT and multiple requests (note at the end of the article). Anyhow, what I know from the tests is that when working INSIDE a DC with some significant overhead due to the TCP/IP stack (and maybe wrong cabling), I do not encounter the same congestion or bandwidth limits I have when dealing with an external DC. This allows me to have more predictable behavior and tune the cluster accordingly. Tuning that I cannot do to cover the transmission to Server3 because of unpredictable packages behavior and spikes. >200ms is too high and can cause delivery failures. If we apply the given knowledge to the virtually-synchronous replication we have with Galera (Percona XtraDB Cluster), we can identify that we are hitting the problems well-explained in Jay’s article Is Synchronous Replication right for your app? There, he explains Callaghan’s Law: [In a Galera cluster] a given row can’t be modified more than once per RTT.  On top of that, when talking of geographical disperse solutions we have the TCP/IP magnifying the effect at writeset transmission/latency level. This causes nodes NOT residing on the same physical contiguous network delay for all the certification-commit phases for an X amount of time. When X is predictable, it may range between 80% – 3% of the light speed for the given distance. But you can’t predict the transmission-time of a set of data split into several datagrams, then sent on the internet, when using TCP/IP. So we cannot use the X range as a trustable measure. The effect is unpredictable delay, and this is read as a network issue from Galera. The node can be evicted from the cluster. Which is exactly what happens, and what we experience when dealing with some “BAD” unspecified network issue. 
This means that whenever we need to use a solution based on a tightly coupled database cluster (like PXC), we cannot locate our nodes farther apart than the largest RTT our shortest desired commit period can tolerate. If our application must apply the data within a maximum of 200ms in one of its functions, and our min RTT is 2ms but our max RTT is 250ms, we cannot use this solution, period. To be clear, locating a node in another geographic location, and as such using the internet to transmit and receive data, is by default a no-go given the unpredictability of that network link. I doubt that nowadays many applications can wait an unpredictable period to commit their data. The only case where a geographically distributed node is acceptable is if you accept commits happening within undefined periods of time, and with possible failures.

What Is the Right Thing To Do?

The right solution is easier than the wrong one, and there are already tools in place to make it work efficiently. Say you need to define your HA solution between the East and West Coast, or between Paris and Frankfurt. First of all, identify the real capacity of your network in each DC. Then build a tightly coupled database cluster in location A and another tightly coupled database cluster in location B. Then link them using ASYNCHRONOUS replication. Finally, use a tool like Replication Manager for Percona XtraDB Cluster to automatically manage asynchronous replication failover between nodes. On top of all of that, use a tool like ProxySQL to manage the application requests. The full architecture is described here.

Conclusions

The myth of using ANY solution based on a tightly coupled database cluster across distributed geographic locations is just that: a myth. It is conceptually wrong and practically dangerous. It MIGHT work when you set it up, it MIGHT work when you test it, it MIGHT even work for some time in production. By definition, it will break, and it will break when it is least convenient. It will break at an unpredictable moment, but for a predictable reason: you did the wrong thing by following a myth. Whenever you need to distribute your data over different geographic locations, and you cannot rely on a single physical channel (fiber) to connect the locations, use asynchronous replication, period!

References

https://github.com/y-trudeau/Mysql-tools/tree/master/PXC
http://www.tusacentral.net/joomla/index.php/mysql-blogs/164-effective-way-to-check-the-network-connection-when-in-need-of-a-geographic-distribution-replication-.html
https://www.percona.com/blog/2013/05/14/is-synchronous-replication-right-for-your-app/

Sample test

#!/bin/bash
test_log=/tmp/results_$(date +'%Y-%m-%d_%H_%M_%S').txt
exec 9>>"$test_log"
exec 2>&9
exec 1>&9
echo "$(date +'%Y-%m-%d_%H_%M_%S')" >&9
for ip in 11 12 13; do
  echo " ==== Processing server 10.0.0.$ip === "
  for size in 1 48 512 1024 4096; do
    echo " --- PING ---"
    ping -M do -c 5 10.0.0.$ip -s $size
    echo " ---- Record Size $size ---- "
    netperf -H 10.0.0.$ip -4 -p 3307 -I 95,10 -i 3,3 -j -a 4096 -A 4096 -P 1 -v 2 -l 20 -t TCP_RR -- -b 5 -r ${size}K,48K -s 1M -S 1M
    echo " ---- ================= ---- "
  done
  echo " ==== ----------------- === "
done
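Coming back to the asynchronous-replication architecture recommended above: wiring two independent clusters together boils down to standard MySQL replication commands. A minimal sketch, assuming GTID is enabled on both clusters and using placeholder hostnames and credentials; the failover automation itself is what a tool like Replication Manager for Percona XtraDB Cluster adds on top.

-- On one node of cluster B, point an asynchronous channel at cluster A.
CHANGE MASTER TO
  MASTER_HOST = 'cluster-a-writer.example.com',  -- placeholder hostname
  MASTER_USER = 'repl',                          -- placeholder replication user
  MASTER_PASSWORD = '********',
  MASTER_AUTO_POSITION = 1;                      -- requires GTID on both clusters

START SLAVE;

-- Verify that the replica is connected and applying events.
SHOW SLAVE STATUS\G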

