

Misick, Hanchell question allegations going global


Providenciales, 01 Feb 2016 – Defendants in the SIPT Government corruption trial want to know who is paying for the website and so-called global media campaign, which is being updated daily with the Opening Statement presented by Andrew Mitchell. In a media release, the question is asked whether taxpayers are footing the bill. The main queries come from Michael Misick, the former Premier, and McCallister Hanchell, the former Minister for Lands, who are both on trial.

Mr. Misick: “I’d like to know how much this is costing, which taxpayers are paying for it and who is running this operation to make unproven allegations against the defendants in the press?”

This is the most information the media has had since Helen Garlick was hired by the UK for this case; in the trials against Gordon Kerr and Timothy O’Sullivan, attorneys practising in the TCI, the evidence presented in court was barred from being reported, citing potential reputational damage for the pair.

Hanchell added, “We all want a fair trial, but when we don’t have access to a jury and the state uses taxpayer money to fund a PR blitz against us, you wonder if justice will really be served?”

Nine are on trial to answer the Special Investigation charges of corruption. Misick and Hanchell say the ‘mystery group’ behind the website should be outed.


City Council sides with unions, backs effort to strike Prop. B from charter


June 10, 2019 | Updated: 10:18 PM | KUSI Newsroom

“I look forward to supporting those willing to continue to fight to keep pension reform the law in San Diego. 4/4” — Chris Cate (@chrisjcate) June 10, 2019

SAN DIEGO (KUSI) – The San Diego City Council voted 6-3 in closed session Monday to join a coalition of four local labor unions in their effort to strip 2012’s Proposition B pension reform initiative from the city charter.

The group of labor unions, headed by the Municipal Employees Association, which represents city workers, intends to begin a quo warranto process to have the initiative struck from the charter. Because the council cannot override the voters’ approval of Proposition B, only a court or a second vote on an upcoming ballot can remove the initiative.

The process is likely to begin at a lower court level, assuming approval by state Attorney General Xavier Becerra, who must sign off on the city and unions’ quo warranto request. The initiative’s backers, who include Mayor Kevin Faulconer, could also continue the legal fight to keep it in the charter, launching another lengthy process of appeals up through the court system.

The state Supreme Court ruled last year that Proposition B was placed on the ballot by then-Mayor Jerry Sanders and not simply a coalition of voters, making it subject to state labor laws.
Sanders violated state law by avoiding negotiations with local unions while drafting the initiative, which is required under the 1968 Meyers-Milias-Brown Act when an initiative affects the benefits of union workers, the state’s high court found.

Proposition B’s supporters, including the city at that time, filed an appeal with the U.S. Supreme Court, which declined to take the case, leaving the state court’s ruling in place.

“Yes, 150,000 voters said ‘yes’ on Prop. B and another 80,000 said ‘no,’ so you have 234,000 people who said anything about Prop. B,” MEA attorney Ann Smith told the council. “You, on the other hand, have 1.4 million residents in this city to serve and you have to care about what is in their best interest; that’s your job.”

A state appellate court also ruled earlier this year that the city is required to give back pay to the roughly 4,000 city employees who have been hired since 2012 and would otherwise have received a pension. However, that cannot be done until the initiative’s rules governing pensions are off the books.

About two-thirds of San Diego voters approved Proposition B in 2012. Then-City Council members Carl DeMaio and Faulconer backed the initiative and have continued to do so despite the state Supreme Court ruling. Supporters have argued the initiative is not subject to the Meyers-Milias-Brown Act because the act is not applicable to citizens’ initiatives.

“The council’s vote today to invalidate Prop. B goes far beyond what any court — including the Supreme Court — has ordered the city to do,” City Councilman Mark Kersey said in a Twitter post. “We have the ability to make whole the affected employees without overturning the will of a near super-majority of voters in 2012.”

The six Democrats on the technically nonpartisan council voted to back the unions in the quo warranto request, while Republican City Council members Chris Cate and Scott Sherman and Kersey, an independent, voted against it.
“Cities up and down the state are grappling with unsustainable pension benefits, having to either cut benefits for retirees or increase taxes on their residents to make payments. 2/4” — Chris Cate (@chrisjcate) June 10, 2019

City Council sides with unions, backs effort to strike Prop. B from charter
Posted: June 10, 2019 | KUSI Newsroom | Categories: Local San Diego News

Google Duplex arriving on iPhones, more Android devices


Google Duplex can make appointments for you. Getty Images

Google Duplex, the voice assistant that sounds remarkably human, is making its way to more devices. The new Google Assistant feature for booking restaurant reservations is rolling out to more Android and iOS devices in the US, a Google spokesperson confirmed Wednesday.

The support page for Duplex, spotted earlier by 9to5Google, says Android devices running version 5.0 or later and iPhones with Google Assistant installed can use Duplex, but not everywhere. XDA Developers reported Wednesday it was able to use Duplex on a Samsung Galaxy S10 Plus.

Previously, Duplex was available only on Google’s Pixel phones. In March, Google said Duplex was expanding to Pixel owners in 43 states.

Google Duplex was announced last May and boasted the ability to make calls on the user’s behalf. Instead of a robotic voice, Duplex has a voice that sounds human and uses “complex sentences, fast speech and long remarks.” The assistant can schedule appointments, make reservations and get information from businesses.

To use Duplex, you need only ask Google Assistant to perform a task like booking a table or scheduling an appointment.

Originally published April 3, 10:08 a.m. PT.
Update, 10:32 a.m.: Adds confirmation from Google.


Bullet-hit Rohingya man dies at Ctg medical


Injured Rohingya dies at CMCH

One of the two Rohingyas who were shot and injured on the Bangladesh-Myanmar border in Teknaf upazila of Cox’s Bazar succumbed to his injuries at Chittagong Medical College Hospital (CMCH) on Saturday, reports UNB.

The deceased, Mohammad Musa, 22, son of Ismail, was from the Maungdaw area in Rakhine State of Myanmar.

CMCH police outpost in-charge sub-inspector Jahirul Islam said that the two injured Rohingyas were brought to the hospital in the morning. “One of them succumbed to his injuries after one hour of admission,” he added.

The condition of the other Rohingya — Md Mokter, 27, son of Gul Mohammad Sheikh — who is undergoing treatment at the hospital, is critical, he added.

Deceased Musa’s cousin, Kawser, said bullet-injured Musa and Mokter entered Bangladesh through the sea route at around 2:00am on Friday following fresh tensions in Rakhine State of Myanmar. They sustained bullet injuries in Rakhine State at around 3:00am on Thursday.


Sweden arrests man for terrorist crime after truck attack


Damage to a store after the stolen truck was driven through a crowd outside a department store in Stockholm. Photo: AFP

Sweden early Saturday arrested a man for a “terrorist crime” hours after a beer truck ploughed into a crowd outside a busy department store in central Stockholm, killing four and injuring 15.

The man was arrested “on suspicion of a terrorist crime through murder,” Karin Rosander, a communications director at the Swedish Prosecution Authority, told AFP.

Police said earlier on Friday after the attack that they had detained a man who “matched the description” of a photo released of a suspect wearing a dark hoodie and military green jacket. But they did not confirm whether he drove the truck. According to the Aftonbladet newspaper, the man is a 39-year-old of Uzbek origin and a supporter of the Islamic State (IS) group.

If confirmed as a terror attack, it would be Sweden’s first such deadly assault. The 15 injured included children, and nine people were “seriously” wounded, health authorities said.

Prime Minister Stefan Lofven said he had strengthened the country’s border controls. “Terrorists want us to be afraid, want us to change our behaviour, want us to not live our lives normally, but that is what we’re going to do.
So terrorists can never defeat Sweden, never,” he said.

The attack occurred just before 3:00 pm (1300 GMT) when the stolen truck slammed into the corner of the bustling Ahlens store and the popular pedestrian street Drottninggatan, above ground from Stockholm’s central subway station. Pictures taken at the scene showed a large blue beer truck with a mangled undercarriage smashed into the Ahlens department store.

Witnesses described scenes of terror and panic. “A massive truck starts driving … and mangles everything and just drives over exactly everything,” eyewitness Rikard Gauffin told AFP. “It was so terrible and there were bodies lying everywhere… it was really terrifying,” he added.

The truck was towed away in the early hours of Saturday.

Trapped

Police cars and ambulances rapidly flooded the scene after the attack, as central streets and squares were blocked off amid fears that another attack could be imminent. Helicopters hovered overhead across the city, sirens wailed, and police vans criss-crossed the streets using loudspeakers to urge people to head straight home and avoid crowded places.

But with the metro system and commuter trains shut down for several hours after the attack, other streets heading out of the city were packed with thousands of pedestrians trying to find a way home.

Haval, a 30-year-old sales clerk who didn’t want to reveal his last name, was in the metro at the time of the attack. His train stopped immediately and he had to get out, along with all the other passengers. They walked along the street before being ushered inside a nearby hotel for safety.

“We were suddenly trapped inside a hotel and there was the worst kind of horror in there,” he told AFP. “We were scared, we were scared something else would happen,” he added.

‘Bleeding to death’

Marko was in a coffee shop near the scene with his girlfriend when he saw the truck ram into the store.

“He hit a woman first, then he drove over a bunch of other people … We took care of everyone lying on the ground,”
he told Swedish daily newspaper Aftonbladet.

Hasan Sidi, another passerby, told Aftonbladet he saw two elderly women lying on the ground. He said people at the scene urged him to help one of the women, who was “bleeding to death”.

“One of them died… I don’t know if the other one made it,” Sidi said. “The police were shocked. Everyone was shocked.”

‘You can’t break us’

In an editorial, Sweden’s biggest broadsheet Dagens Nyheter wrote: “What we feared for a long time finally happened.”

“The fear and panic right after the incident was inevitable. The images from the attack were terrible,” the paper said. But Stockholm managed to stay “cool-headed” even though the attacker struck “Sweden and Stockholm’s heart”, it added.

Friday’s attack was the latest in a string of similar assaults with vehicles in Europe, including in London, Berlin and the southern French city of Nice. The deadliest came last year in France on the July 14 Bastille Day national holiday, when a man rammed a truck into a crowd in the Mediterranean resort of Nice, killing 86 people.

Last month, Khalid Masood, a 52-year-old convert to Islam known to British security services, killed five people when he drove a car at high speed into pedestrians on London’s Westminster Bridge before launching a frenzied knife attack on a policeman guarding the parliament building. And in December, a man hijacked a truck and slammed into shoppers at a Christmas market in Berlin, killing 12 people.

In 2014, IS called for attacks on citizens of Western countries and gave instructions on how they could be carried out without military equipment, using rocks or knives, or by running people over in vehicles.


Sexual harassment case against Sirajuddoula shifted to tribunal


SM Sirajuddoula

The case filed against SM Sirajuddoula, principal of Sonagazi Islamia Senior Fazil Madrasa, for sexually harassing his student Nusrat Jahan Rafi before she was burnt to death, was shifted to the Women and Children Repression Prevention Tribunal on Thursday.

Feni senior judicial magistrate Zakir Hossain shifted the case, fixing 9 July for the next hearing.

On Wednesday, the court took cognisance of the charges brought against SM Sirajuddoula after the Police Bureau of Investigation (PBI) submitted the charge-sheet. The PBI submitted the 271-page charge-sheet 98 days after the sexual harassment of the madrasa girl. A total of 29 people were made witnesses in the case.

Meanwhile, night guard M Mostafa of the madrasa testified before the Women and Children Repression Prevention Tribunal on Thursday in the murder case filed over the killing of Nusrat. Tribunal judge Mamunur Rashid fixed 7 July for the next hearing.

On 27 March, Sirajuddoula sexually harassed Nusrat in his office. Her mother filed the case against the principal the following day. On 6 April, Nusrat was set afire at an examination centre, allegedly by people loyal to the principal, after he was arrested and subsequently suspended following the filing of the case. She lost her battle for life on 10 April at Dhaka Medical College Hospital in the capital.


Celebrating different cultures through art


With the world shrinking fast, cultural boundaries are vanishing and festivals have become global. An online art exhibition, When worlds collide, is a nine-day-long display of this blending of cultures. Organised by Touchtalent.com, an online community for art and creativity, the exhibition will focus on festivals from different regions, such as Diwali and Halloween. The exhibition will be live from 23 to 31 October.

Ankit Prasad, co-founder and CEO of Touchtalent.com, said, ‘This is an impressive opportunity for creative users from across the globe to be part of the exhibition and celebrate two of the world’s important festivals.’

Touchtalent.com is a community of creative individuals to share, appreciate and monetize creativity. Touchtalent has a social reach of 60 million in over 192 countries. Every day, users from more than 100 countries visit Touchtalent to showcase their creative content.

One sees a lot of creativity around these two festivals. In India people love to create and gift paintings, decorate their houses with rangoli etc, while in the West people dress creatively on the occasion of Halloween. The exhibition is an ingenious way of appreciating the diversity of faith. The organisers are looking for entries that celebrate the colourful and assorted celebrations all over the world.

Artists can upload their creative work on www.touchtalent.com with the hashtags #Festivals, #Halloween or #Diwali before 22 October. All selected artworks will be displayed in the exhibition.


CAT 2018 to be held on Nov 25


Kolkata: The Indian Institutes of Management (IIMs) will release the notification for the Common Admission Test 2018 (CAT) on Sunday. The registration for CAT 2018 will start on August 8 and continue until September 19.

According to Prof Sumanta Basu, convener, CAT 2018, the examination will be conducted on November 25, 2018 (Sunday) in two sessions. The test centres will be spread across 147 cities.

Candidates will be given the option to select four test cities in order of preference. Cities and centres will be assigned to the candidates only after the last date of CAT 2018 registration. Hence, candidates need not rush to block slots and cities in the initial days of registration.

The authorities will try their best to assign candidates to their first preferred city. In case that is not possible, candidates will be assigned a city following their given order of preference, and in the rare case that a candidate is not allotted any of the preferred cities, he/she will be allotted an alternate city. However, candidates will not be able to select the session, because it will be assigned randomly.

Candidates must pay the registration fee online, through credit cards, debit cards or net banking. After submitting their applications, candidates will be permitted to download their examination admit card from October 24 onwards, until the date of the test.

The CAT website contains a section on ‘Frequently Asked Questions’ (FAQ) that addresses some of the commonly asked queries regarding the examination. Candidates may also contact the CAT help desk through email or phone.

Candidates will be allotted exactly 60 minutes for answering questions in each section, and they are not allowed to switch from one section to another while answering questions in a particular section.
Candidates are advised to work on the tutorials available on the CAT website well in advance.


Priya Cinema to open gates for Kolkatans after 6-month hiatus


Kolkata: After being closed for more than six months, the iconic Priya Cinema on Rashbehari Avenue in South Kolkata will open its gates for cinephiles on Thursday for a special screening of Satyajit Ray’s cult classic Goopy Gyne Bagha Byne. The landmark single-screen theatre, which had been closed since August after a fire broke out during one of its late-night movie screenings, has now been revamped with state-of-the-art seating arrangements and fire safety equipment.

The opening of the theatre has created quite a buzz in Tollywood, as celebrities like Prosenjit Chatterjee, Kaushik Ganguly, Arindam Sil, Gautam Ghosh and Abir Chatterjee, among others, will be attending the special screening, which will begin at 7 pm.

Arijit Dutta, the managing director of Priya Entertainment Private Limited, said: “After receiving the licence from the Fire department, the theatre will be restarted on Thursday evening, where we will have a special screening of Goopy Gyne Bagha Byne in the original black and white 35 mm print. We have particularly selected this movie for the screening as I feel that it is the most iconic Bengali movie ever made, which is of concern to the general public nationally as well as internationally.”

Apart from the celebrities, a considerable number of moviegoers will also get a taste of the special screening, as some passes have been handed out to the general public free of cost after announcements were made on social media. Extending his regards to Chief Minister Mamata Banerjee for her support, Dutta said: “We have installed the most modern fire safety equipment available in the market and the structures of the building, which is more than 65 years old, have also been repaired.
The seating arrangements have also been refashioned as we have drastically increased the leg space by bringing down the number of seats from 930 to 540, apart from installing recliner sofas. We have also done some revamping to the interiors as the lobby has been done up differently.”

Following Thursday’s special screening, the theatre will screen movies like Kaushik Ganguly’s Nagarkirtan, along with shows of Double Dhamaal and Gully Boy, on Friday.


Getting started with the Amazon Machine Learning workflow [Tutorial]


first_imgAmazon Machine Learning is useful for building ML models and generating predictions. It also enables the development of robust and scalable smart applications. The process of building ML models with Amazon Machine Learning consists of three operations: data analysis model training evaluation. The code files for this article are available on Github. This tutorial is an excerpt from a book written by Alexis Perrier titled Effective Amazon Machine Learning. The Amazon Machine Learning service is available at https://console.aws.amazon.com/machinelearning/. The Amazon ML workflow closely follows a standard  Data Science workflow with steps: Extract the data and clean it up. Make it available to the algorithm. Split the data into a training and validation set, typically a 70/30 split with equal distribution of the predictors in each part. Select the best model by training several models on the training dataset and comparing their performances on the validation dataset. Use the best model for predictions on new data. As shown in the following Amazon ML menu, the service is built around four objects: Datasource ML model Evaluation Prediction The Datasource and Model can also be configured and set up in the same flow by creating a new Datasource and ML model. Let us take a closer look at each one of these steps. Understanding the dataset used We will use the simple Predicting Weight by Height and Age dataset (from Lewis Taylor (1967)) with 237 samples of children’s age, weight, height, and gender, which is available at https://v8doc.sas.com/sashtml/stat/chap55/sect51.htm. This dataset is composed of 237 rows. Each row has the following predictors: sex (F, M), age (in months), height (in inches), and we are trying to predict the weight (in lbs) of these children. There are no missing values and no outliers. The variables are close enough in range and normalization is not required. We do not need to carry out any preprocessing or cleaning on the original dataset. 
Age, height, and weight are numerical variables (real-valued), and sex is a categorical variable. We will randomly select 20% of the rows as the held-out subset to use for prediction on previously unseen data and keep the other 80% as training and evaluation data. This data split can be done in Excel or any other spreadsheet editor:

- By creating a new column with randomly generated numbers
- Sorting the spreadsheet by that column
- Selecting 190 rows for training and 47 rows for prediction (roughly an 80/20 split)

Let us name the training set LT67_training.csv and the held-out set that we will use for prediction LT67_heldout.csv, where LT67 stands for Lewis and Taylor, the creators of this dataset in 1967. As with all datasets, scripts, and resources mentioned in this book, the training and holdout files are available in the GitHub repository at https://github.com/alexperrier/packt-aml.

It is important for the distribution in age, sex, height, and weight to be similar in both subsets. We want the data on which we will make predictions to show patterns that are similar to the data on which we will train and optimize our model.

Loading the data on S3

Follow these steps to load the training and held-out datasets on S3:

1. Go to your S3 console at https://console.aws.amazon.com/s3.
2. Create a bucket if you haven’t done so already. Buckets are basically folders that are uniquely named across all of S3. We created a bucket named aml.packt. Since that name has now been taken, you will have to choose another bucket name if you are following along with this demonstration.
3. Click on the bucket name you created and upload both the LT67_training.csv and LT67_heldout.csv files by selecting Upload from the Actions drop-down menu.

Both files are small, only a few KB, and hosting costs should remain negligible for this exercise.
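For readers working outside a spreadsheet, the same shuffle-and-split can be sketched in Python with pandas. The DataFrame below is a synthetic stand-in for the 237 Lewis and Taylor rows (the real data would be read from the downloaded file instead); only the file names match those used in the text:

```python
import numpy as np
import pandas as pd

def split_train_heldout(df, train_frac=0.8, seed=42):
    """Shuffle the rows and split them, mirroring the random-column-and-sort
    trick described above (80% training, 20% held out)."""
    shuffled = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    n_train = int(round(len(shuffled) * train_frac))
    return shuffled.iloc[:n_train], shuffled.iloc[n_train:]

# Synthetic stand-in for the 237-row dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], size=237),
    "age": rng.integers(140, 180, size=237),              # months
    "height": np.round(rng.normal(62, 3, size=237), 1),   # inches
    "weight": np.round(rng.normal(100, 15, size=237), 1), # lbs
})
training, heldout = split_train_heldout(df)
training.to_csv("LT67_training.csv", index=False)
heldout.to_csv("LT67_heldout.csv", index=False)
print(len(training), len(heldout))  # 190 47
```

With 237 rows this reproduces exactly the 190/47 split used in the text.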
Note that for each file, by selecting the Properties tab on the right, you can specify how your files are accessed; what user, role, group, or AWS service may download, read, write, and delete the files; and whether or not they should be accessible from the open Web. When creating the datasource in Amazon ML, you will be prompted to grant Amazon ML access to your input data. You can specify the access rules to these files now in S3 or simply grant access later on.

Our data is now in the cloud in an S3 bucket. We need to tell Amazon ML where to find that input data by creating a datasource. We will first create the datasource for the training file LT67_training.csv.

Declaring a datasource

Go to the Amazon ML dashboard, and click on Create new… | Datasource and ML model. We will use the faster flow available by default.

As shown in the following screenshot, you are asked to specify the path to the LT67_training.csv file: {S3://bucket}{path}{file}. Note that the S3 location field automatically populates with the bucket names and file names that are available to your user.

Specifying a Datasource name is useful for organizing your Amazon ML assets. By clicking on Verify, Amazon ML will make sure that it has the proper rights to access the file. In case it needs to be granted access to the file, you will be prompted to do so, as shown in the following screenshot. Just click on Yes to grant access. At this point, Amazon ML will validate the datasource and analyze its contents.

Creating the datasource

An Amazon ML datasource is composed of the following:

- The location of the data file: the data file is not duplicated or cloned in Amazon ML but accessed from S3
- The schema that contains information on the type of the variables contained in the CSV file:
  - Categorical
  - Text
  - Numeric (real-valued)
  - Binary

It is possible to supply Amazon ML with your own schema or modify the one created by Amazon ML.
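The same datasource can be declared programmatically. The sketch below builds the schema JSON for our four columns in Amazon ML's schema format; the boto3 `machinelearning` call that would register it is shown commented out, since it needs live AWS credentials, and the `DataSourceId` and bucket path are hypothetical placeholders:

```python
import json

# Schema for LT67_training.csv: the console infers this automatically,
# but the same JSON can be supplied explicitly.
schema = {
    "version": "1.0",
    "targetAttributeName": "weight",
    "dataFormat": "CSV",
    "dataFileContainsHeader": True,
    "attributes": [
        {"attributeName": "sex", "attributeType": "CATEGORICAL"},
        {"attributeName": "age", "attributeType": "NUMERIC"},
        {"attributeName": "height", "attributeType": "NUMERIC"},
        {"attributeName": "weight", "attributeType": "NUMERIC"},
    ],
}
schema_json = json.dumps(schema)

# Equivalent of the console flow (requires AWS credentials and a real bucket):
# import boto3
# ml = boto3.client("machinelearning")
# ml.create_data_source_from_s3(
#     DataSourceId="ds-lt67-training",        # hypothetical ID
#     DataSourceName="LT67 training",
#     DataSpec={
#         "DataLocationS3": "s3://aml.packt/LT67_training.csv",
#         "DataSchema": schema_json,
#     },
#     ComputeStatistics=True,
# )
```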
At this point, Amazon ML has a pretty good idea of the type of data in your training dataset. It has identified the different types of variables and knows how many rows it has. Move on to the next step by clicking on Continue, and see what schema Amazon ML has inferred from the dataset, as shown in the next screenshot.

Amazon ML needs to know at this point which variable you are trying to predict. Be sure to tell Amazon ML the following:

- The first line in the CSV file contains the column names
- The target is the weight

We see here that Amazon ML has correctly inferred the following:

- sex is categorical
- age, height, and weight are numeric (continuous real values)

Since we chose a numeric variable as the target, Amazon ML will use Linear Regression as the predictive model. For binary or categorical values, we would have used Logistic Regression. This means that Amazon ML will try to find the best a, b, and c coefficients so that the weight predicted by the following equation is as close as possible to the observed real weight present in the data:

predicted weight = a * age + b * height + c * sex

Amazon ML will then ask you if your data contains a row identifier. In our present case, it does not. Row identifiers are useful when you want to understand the prediction obtained for each row or add an extra column to your dataset later on in your project. Row identifiers are for reference purposes only and are not used by the service to build the model.

You will be asked to review the datasource. You can go back to each one of the previous steps and edit the parameters for the schema, the target, and the input data. Now that the data is known to Amazon ML, the next step is to set up the parameters of the algorithm that will train the model.

Understanding the model

We select the default parameters for the training and evaluation settings.
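To make the model form concrete, here is a minimal sketch that fits the same predicted weight = a * age + b * height + c * sex equation (plus an intercept) with ordinary least squares on synthetic data. Amazon ML itself trains with SGD, so this only illustrates what the coefficients mean; the coefficient values below are made up for the simulation:

```python
import numpy as np

# Synthetic rows with sex encoded m=0, f=1, as the recipe does;
# the true coefficients (0.3, 1.2, -2.0) are arbitrary for this demo.
rng = np.random.default_rng(1)
n = 190
age = rng.uniform(140, 180, n)      # months
height = rng.uniform(55, 70, n)     # inches
sex = rng.integers(0, 2, n).astype(float)
weight = 0.3 * age + 1.2 * height - 2.0 * sex + rng.normal(0, 5, n)

# Design matrix [age, height, sex, 1] -> coefficients a, b, c and intercept.
X = np.column_stack([age, height, sex, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
a, b, c, intercept = coef

pred = X @ coef
rmse = np.sqrt(np.mean((pred - weight) ** 2))
```

The recovered a, b, and c land close to the simulated values, and the RMSE settles near the noise level of the synthetic data.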
Amazon ML will do the following:

- Create a recipe for data transformation based on the statistical properties it has inferred from the dataset
- Split the dataset (LT67_training.csv) into a training part and a validation part, with a 70/30 split. The split strategy assumes the data has already been shuffled and can be split sequentially.

The recipe will be used to transform the data in a similar way for the training and the validation datasets. The only transformation suggested by Amazon ML is to transform the categorical variable sex into a binary variable, where m = 0 and f = 1, for instance. No other transformation is needed.

The default advanced settings for the model are shown in the following screenshot. We see that Amazon ML will pass over the data 10 times, shuffle splitting the data each time. It will use an L2 regularization strategy, based on the sum of the squares of the coefficients of the regression, to prevent overfitting. We will evaluate the predictive power of the model using our LT67_heldout.csv dataset later on.

Regularization comes in three levels, with a mild (10^-6), medium (10^-4), or aggressive (10^-2) setting, each value stronger than the previous one. The default setting is mild, the lowest, with a regularization constant of 0.000001 (10^-6), implying that Amazon ML does not anticipate much overfitting on this dataset. This makes sense when the number of predictors, three in our case, is much smaller than the number of samples (190 for the training set).

Clicking on the Create ML model button will launch the model creation. This takes a few minutes to resolve, depending on the size and complexity of your dataset. You can check its status by refreshing the model page. In the meantime, the model status remains Pending. At this point, Amazon ML will split our training dataset into two subsets: a training and a validation set.
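The training procedure sketched in these settings, several passes of SGD with shuffling before each pass and a mild L2 penalty, can be written from scratch. This is not Amazon's implementation, just a toy version of the same idea on a two-feature problem:

```python
import numpy as np

def sgd_l2(X, y, lr=0.01, passes=10, l2=1e-6, seed=0):
    """Per-sample SGD for linear regression with L2 regularization:
    10 passes, shuffling before each pass, 'mild' constant 1e-6,
    mirroring the defaults described above."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(passes):
        order = rng.permutation(len(y))          # shuffle each pass
        for i in order:
            err = X[i] @ w - y[i]
            # gradient of 0.5*err^2 + 0.5*l2*||w||^2 for one sample
            w -= lr * (err * X[i] + l2 * w)
    return w

# Toy check: recover y = 2*x1 - 1*x2 from noisy samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(0, 0.1, 200)
w = sgd_l2(X, y)
rmse = np.sqrt(np.mean((X @ w - y) ** 2))
```

With these settings the weights converge close to (2, -1) and the RMSE drops to roughly the noise level.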
It will use the training portion of the data to train several settings of the algorithm and select the best one based on its performance on the training data. It will then apply the associated model to the validation set and return an evaluation score for that model. By default, Amazon ML will sequentially take the first 70% of the samples for training and the remaining 30% for validation.

It’s worth noting that Amazon ML will not create two extra files and store them on S3, but will instead create two new datasources out of the initial datasource we have previously defined. Each new datasource is obtained from the original one via a Data rearrangement JSON recipe such as the following:

{
  "splitting": {
    "percentBegin": 0,
    "percentEnd": 70
  }
}

You can see these two new datasources in the Datasource dashboard. Three datasources are now available where there was initially only one, as shown by the following screenshot.

While the model is being trained, Amazon ML runs the Stochastic Gradient Descent algorithm several times on the training data with different parameters:

- Varying the learning rate in increments of powers of 10: 0.01, 0.1, 1, 10, and 100
- Making several passes over the training data while shuffling the samples before each pass
- At each pass, calculating the prediction error, the Root Mean Squared Error (RMSE), to estimate how much of an improvement over the last pass was obtained

If the decrease in RMSE is not really significant, the algorithm is considered to have converged, and no further pass will be made. At the end of the passes, the setting that ends up with the lowest RMSE wins, and the associated model (the weights of the regression) is selected as the best version.

Once the model has finished training, Amazon ML evaluates its performance on the validation datasource. Once the evaluation itself is ready, you have access to the model’s evaluation.

Evaluating the model

Amazon ML uses the standard metric RMSE for linear regression.
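Assuming a simple sequential reading of percentBegin/percentEnd (the exact server-side semantics belong to Amazon), such a rearrangement recipe can be mimicked locally like this:

```python
import json
import pandas as pd

def rearrange(df, recipe_json):
    """Apply an Amazon ML style data-rearrangement recipe
    (sequential percentBegin/percentEnd split) to a DataFrame."""
    recipe = json.loads(recipe_json)["splitting"]
    n = len(df)
    begin = n * recipe["percentBegin"] // 100
    end = n * recipe["percentEnd"] // 100
    return df.iloc[begin:end]

# 190 training rows, split 70/30 as in the text.
df = pd.DataFrame({"x": range(190)})
train = rearrange(df, '{"splitting": {"percentBegin": 0, "percentEnd": 70}}')
valid = rearrange(df, '{"splitting": {"percentBegin": 70, "percentEnd": 100}}')
print(len(train), len(valid))  # 133 57
```

The two recipes cover the whole datasource between them without duplicating any rows, which is why no extra files need to be written to S3.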
RMSE is defined as the square root of the mean of the squared differences between the real values and the predicted values:

RMSE = sqrt( (1/n) * Σ (ŷᵢ - yᵢ)² )

Here, ŷ denotes the predicted values, and y the real values we want to predict (the weight of the children in our case). The closer the predictions are to the real values, the lower the RMSE. A lower RMSE means a better, more accurate prediction.

Making batch predictions

We now have a model that has been properly trained and selected among other candidates. We can use it to make predictions on new data. A batch prediction consists in applying a model to a datasource in order to make predictions on that datasource. We need to tell Amazon ML which model we want to apply to which data.

Batch predictions are different from streaming predictions. With batch predictions, all the data is already available as a datasource, while for streaming predictions the data is fed to the model as it becomes available; the dataset is not available beforehand in its entirety.

In the main menu, select Batch Predictions to access the predictions dashboard and click on Create a New Prediction. The first step is to select one of the models available in your model dashboard. You should choose the one with the lowest RMSE.

The next step is to associate a datasource with the model you just selected. We uploaded the held-out dataset to S3 at the beginning of this chapter (in the Loading the data on S3 section) but had not used it to create a datasource. We will do so now. When asked for a datasource in the next screen, make sure to check My data is in S3, and I need to create a datasource, and then select the held-out dataset that should already be present in your S3 bucket. Don't forget to tell Amazon ML that the first line of the file contains the column names.

In our current project, the held-out dataset also contains the true values for the weight of the students. This would not be the case for "real" data in a real-world project, where the true values are genuinely unknown.
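The console workflow above has a programmatic equivalent via the create_batch_prediction call, which pairs a model with a datasource and an S3 output location. A minimal sketch, with all three identifiers as hypothetical placeholders:

```python
def batch_prediction_request(model_id, datasource_id, output_uri):
    """Assemble the parameters for machinelearning.create_batch_prediction.

    All three arguments are hypothetical placeholders here; output_uri is
    the S3 folder where Amazon ML writes the manifest and gzipped results.
    """
    return {
        "BatchPredictionId": f"bp-{model_id}",
        "BatchPredictionName": f"Batch prediction for {model_id}",
        "MLModelId": model_id,
        "BatchPredictionDataSourceId": datasource_id,
        "OutputUri": output_uri,
    }

# With a boto3 "machinelearning" client:
# client.create_batch_prediction(**batch_prediction_request(
#     "ml-weight-predictor", "ds-heldout", "s3://my-bucket/batch-predictions/"))
```

Whichever route you take, the output lands in the same place: a manifest file plus a results folder in the bucket, as described next.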
However, in our case, this will allow us to calculate the RMSE score of our predictions and assess their quality. The final step is to click on the Verify button and wait for a few minutes.

Amazon ML will run the model on the new datasource and generate predictions in the form of a CSV file. Contrary to the evaluation and model-building phase, we now have real predictions, and we are no longer given a score associated with them.

After a few minutes, you will notice a new batch-prediction folder in your S3 bucket. This folder contains a manifest file and a results folder. The manifest file is a JSON file with the path to the initial datasource and the path to the results file. The results folder contains a gzipped CSV file. Uncompressed, the CSV file contains two columns: trueLabel, the initial target from the held-out set, and score, which corresponds to the predicted values.

We can easily calculate the RMSE for these results directly in a spreadsheet through the following steps:

1. Create a new column that holds the square of the difference between the two columns.
2. Average all the rows of that column.
3. Take the square root of the result.

The following illustration shows how we create a third column, C, as the squared difference between the trueLabel column A and the score (predicted value) column B. As shown in the following screenshot, averaging column C and taking the square root gives an RMSE of 11.96, which is significantly better than the RMSE we obtained during the evaluation phase (14.4).

The fact that the RMSE on the held-out set is better than the RMSE on the validation set means that our model did not overfit the training data, since it performed even better on new data than expected. Our model is robust.

The left side of the following graph shows the True (Triangle) and Predicted (Circle) Weight values for all the samples in the held-out set. The right side shows the histogram of the residuals.
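The spreadsheet steps above can equally be scripted. A minimal sketch (the file path is a hypothetical example) that reads the gzipped results file and computes the RMSE from its trueLabel and score columns:

```python
import csv
import gzip
import math

def rmse_from_results(path):
    """Compute RMSE from an Amazon ML batch-prediction results file.

    The file is a gzipped CSV with a header row containing the
    trueLabel and score columns described above.
    """
    squared_errors = []
    with gzip.open(path, "rt") as f:
        for row in csv.DictReader(f):
            squared_errors.append(
                (float(row["trueLabel"]) - float(row["score"])) ** 2)
    # Square root of the mean squared error: same steps as the spreadsheet.
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Example (hypothetical path):
# rmse_from_results("batch-prediction/results/bp-results.csv.gz")
```

Running this on the downloaded results file should reproduce the 11.96 figure obtained in the spreadsheet.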
Similar to the histogram of residuals we observed on the validation set, we see that the residuals are not centered on 0: our model has a tendency to overestimate the weight of the students.

In this tutorial, we successfully loaded the data on S3 and let Amazon ML infer the schema and transform the data. We also created a model and evaluated its performance. Finally, we made a prediction on the held-out dataset.

To understand how to leverage Amazon's powerful platform for your predictive analytics needs, check out the book Effective Amazon Machine Learning.

Read Next

Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
Amazon Sagemaker makes machine learning on the cloud easy
Amazon ML Solutions Lab to help customers "work backwards" and leverage machine learning