

WICB to prioritise fast bowling


ST JOHN’S, Antigua (CMC): The West Indies Cricket Board (WICB) says it wants to stage a series of fast bowling camps as part of a plan to rekindle an area of the game that has been struggling in recent years. WICB director of cricket, Richard Pybus, made the announcement at the conclusion of the 10-round Professional Cricket League (PCL) Regional four-day Tournament on Monday. The WICB is pondering the introduction of an off-season training programme targeting fast bowlers after spinners dominated the just-ended four-day tournament.

“We are prioritising and looking at some camps for our fast bowlers, possibly some measures off season to prioritise fast bowling in the four-day competition,” said Pybus in an interview with WICB media. “This is going to be central to us getting that back at the heart of West Indies cricket again.”

Spinners featured prominently during the tournament, including the top wicket-taker, Jamaican spinner Nikita Miller, who had 65 scalps in nine matches.

“The competition has been still dominated too much by the spin bowlers,” said Pybus. “That is something that we will have to seriously address during the off season to make sure that we are prioritising the fast bowlers.”

Guyana’s Leon Johnson, with 807, scored the most runs for the season, followed by Devon Smith of the Windward Islands, who scored 719, though from two fewer matches. Guyana’s Vishal Singh and Barbados’ Roston Chase were the other players who scored over 700 runs.

“I think something which is exceptionally positive is the quality of the batting. We are getting a consistency in the scoring,” said Pybus. “We got a core group of young players who have put their hand up all the way through the competition. We are getting consistent with hundreds being scored. Volume of hundreds and volume of runs which I think is very positive.”

Guyana Jaguars were crowned champions of the R4Day for the second straight year. They finished with 149 points, seven clear of nearest rivals Barbados Pride, to regain the George Headley/Everton Weekes Trophy, symbol of regional first-class supremacy.

“The first season of the PCL was very rushed and the systems we wanted to put in place to be able to support it were not where we wanted them to be,” he said. “So this year is closer to where I would like to see the system in terms of providing support to the players and the structure of the season regarding the off season programmes for the players.”


Firefighters Quickly Surround Fire Between La Jolla and Miramar


SAN DIEGO – (KUSI) A brush fire erupted on the western side of MCAS Miramar, near the 805 Freeway. Fire crews have been battling the blaze since lunchtime.

The fire is about 90 percent contained and burned only about 2 acres, but that is thanks to fire crews working fast over a three-hour period; the flames could easily have spread to Miramar National Cemetery. Officials stated the fire is under control but expect to be on scene until it is fully contained.

No damage to structures, including the cemetery, and no injuries have been reported at this time.

Ashlie Rodriguez, July 30, 2018


Colours of concern


The Capital gears up for the fourth edition of India on Canvas that opens tomorrow. It is an endeavour that brings together some of the most eminent personalities with some of the greatest names in the art fraternity to jointly produce canvases that will be auctioned for Khushii (Kinship for Humanitarian, Social and Holistic Intervention in India), a non-governmental organisation led by a team of committed philanthropists and headed by legendary cricketer Kapil Dev.

The artworks are open for preview from January 9 till 15. India on Canvas, with its unusual concept, has been a successful platform in all its previous three editions, helping bridge the divide between the underprivileged and the privileged. The auction will take place on January 15 at the residence of the British High Commissioner, New Delhi.

India on Canvas Edition IV will also see a work created by the Prime Minister, Narendra Modi, in collaboration with celebrated artist Satish Gupta. The painting, titled Om NAMO Shivay, is a tribute to Lord Shiva and is a beautiful melange of sculpture and painting on a single canvas.

This edition of India on Canvas has also produced some of the most unique works of art ever created in India. Personalities such as the Finance Minister Arun Jaitley, MC Mary Kom, Shahrukh Khan, Attorney General Mukul Rohatgi, Sister Shivani Brahamkumari, Chetan Bhagat, Sadhguru Jaggi Vasudev, Rekha and Vishal Bhardwaj, and many other respected figures spared their valuable time to paint for the cause, just as they enjoyed discovering the creative side of their personality. India on Canvas is also being supported by India Inc, with names like Kiran Mazumdar Shaw, Pinky Reddy, Harsh Neotia, Tarini Jindal, Amit Kalyani, Arti Kirloskar, Bindu Kapoor and many others lending their support.

Some of the biggest names in Indian art, such as Akbar Padamsee, Satish Gupta, GR Iranna, Sanjay Bhattacharya, Anjolie Ela Menon, Ranbir Kaleka, Paresh Maity, Jayasri Burman, Mithu Sen, TV Santosh, Bose Krishnamachari, Manu Parekh and Seema Kohli, have put in their best efforts to accommodate the eminent Indians’ creativity and blend it with their own creative styles.

When: January 9 – 15
Time: 11 am – 6 pm
Where: Khushii, 45 Silver Oaks Farm, Vasant Kunj


CAT 2018 to be held on Nov 25


Kolkata: The Indian Institutes of Management (IIMs) will release the notification for the Common Admission Test 2018 (CAT) on Sunday. Registration for CAT 2018 will be open from August 8 until September 19.

According to Prof Sumanta Basu, convener of CAT 2018, the examination will be conducted on November 25, 2018 (Sunday) in two sessions, with test centres spread across 147 cities.

Candidates will be given the option to select four test cities in order of preference. Cities and centres will be assigned to candidates only after the last date of CAT 2018 registration; hence, candidates need not rush to block slots and cities in the initial days of registration. The authorities will try their best to assign candidates to their first preferred city. In case that is not possible, candidates will be assigned a city following their given order of preference, and in the rare case that a candidate is not allotted any of the preferred cities, he/she will be allotted an alternate city. However, candidates will not be able to select the session, because it will be assigned randomly.

Candidates must pay the registration fee online, through credit cards, debit cards or net banking. After submitting their applications, candidates will be permitted to download their examination admit card from October 24 onwards, until the date of the test.

The CAT website contains a section on ‘Frequently Asked Questions’ (FAQ) that addresses some of the commonly asked queries regarding the examination. Candidates may also contact the CAT help desk through email or phone. Candidates will be allotted exactly 60 minutes for answering questions in each section, and they are not allowed to switch from one section to another while answering questions in a particular section. Candidates are advised to work on the tutorials available on the CAT website well in advance.


Strengthening Indo-Thai cultural bond


Namaste Thailand Festival, organised by the Royal Thai Embassy, New Delhi, is coming back with loads of fun-filled activities and workshops from March 15 – 17, 2019 at Select City Walk, New Delhi. The cultural fiesta is being organised to commemorate 72 years of Indo-Thai diplomatic relations.

Bilateral relations between Thailand and India have witnessed immense growth since 1947, and religious, cultural, mythological and commercial exchanges between the two countries have existed for centuries. A festival of Thai food, products, music, performances and culture is therefore all set to further strengthen the cultural bond and enthral the Delhiites.

The three-day event will offer a range of wonderful activities and workshops, where one can enjoy mulberry-paper mini umbrellas, body painting, button-badge activities, posing in Thai costumes and a lot more. The festival will bring a diverse range of stalls, including women’s fashion, paper flowers, accessories, jewellery, home decorations, relaxation aromas and souvenirs, to add extra Thai cultural flavour to the festival. Three quiz competitions about Thailand and its culture are going to be a highlight of the festival. Visitors can also experience exquisite Thai cuisine from the much-popular ‘Nueng Roi’ by Radisson Blu. Renowned Thai artists such as Asia-7, a Thai jazz-fusion band, will bring the festival to a close on a musical note.


Getting started with the Amazon Machine Learning workflow [Tutorial]


Amazon Machine Learning is useful for building ML models and generating predictions. It also enables the development of robust and scalable smart applications. The process of building ML models with Amazon Machine Learning consists of three operations: data analysis, model training, and evaluation.

The code files for this article are available on GitHub. This tutorial is an excerpt from a book written by Alexis Perrier titled Effective Amazon Machine Learning. The Amazon Machine Learning service is available at https://console.aws.amazon.com/machinelearning/.

The Amazon ML workflow closely follows a standard data science workflow:

Extract the data and clean it up.
Make it available to the algorithm.
Split the data into a training and validation set, typically a 70/30 split with equal distribution of the predictors in each part.
Select the best model by training several models on the training dataset and comparing their performances on the validation dataset.
Use the best model for predictions on new data.

As shown in the Amazon ML menu, the service is built around four objects: Datasource, ML model, Evaluation, and Prediction. The Datasource and model can also be configured and set up in the same flow by creating a new Datasource and ML model. Let us take a closer look at each one of these steps.

Understanding the dataset used

We will use the simple Predicting Weight by Height and Age dataset (from Lewis Taylor (1967)), with 237 samples of children’s age, weight, height, and gender, which is available at https://v8doc.sas.com/sashtml/stat/chap55/sect51.htm. Each of the 237 rows has the following predictors: sex (F, M), age (in months), and height (in inches), and we are trying to predict the weight (in lbs) of these children. There are no missing values and no outliers. The variables are close enough in range, so normalization is not required, and we do not need to carry out any preprocessing or cleaning on the original dataset. Age, height, and weight are numerical variables (real-valued); sex is a categorical variable.

We will randomly select 20% of the rows as the held-out subset to use for prediction on previously unseen data and keep the other 80% as training and evaluation data. This data split can be done in Excel or any other spreadsheet editor (a scripted version of the split and upload is sketched after the S3 steps below):

By creating a new column with randomly generated numbers
Sorting the spreadsheet by that column
Selecting 190 rows for training and 47 rows for prediction (roughly an 80/20 split)

Let us name the training set LT67_training.csv and the held-out set that we will use for prediction LT67_heldout.csv, where LT67 stands for Lewis and Taylor, the creators of this dataset in 1967. As with all datasets, scripts, and resources mentioned in this book, the training and holdout files are available in the GitHub repository at https://github.com/alexperrier/packt-aml.

It is important for the distribution in age, sex, height, and weight to be similar in both subsets. We want the data on which we will make predictions to show patterns that are similar to the data on which we will train and optimize our model.

Loading the data on S3

Follow these steps to load the training and held-out datasets on S3:

Go to your S3 console at https://console.aws.amazon.com/s3.
Create a bucket if you haven’t done so already. Buckets are basically folders that are uniquely named across all of S3. We created a bucket named aml.packt. Since that name has now been taken, you will have to choose another bucket name if you are following along with this demonstration.
Click on the bucket name you created and upload both the LT67_training.csv and LT67_heldout.csv files by selecting Upload from the Actions drop-down menu.
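For readers who prefer to script these steps, here is a minimal Python sketch that performs the same shuffled 80/20 split with pandas and uploads the two files with boto3. The local file name LT67.csv, the bucket name my-aml-bucket, and the random seed are our own placeholder choices, not values mandated by the tutorial.

import boto3
import pandas as pd

# Load the full Lewis Taylor (1967) dataset (237 rows), assumed saved locally.
df = pd.read_csv("LT67.csv")

# Shuffle the rows, then take 190 rows for training and 47 as held-out data.
df = df.sample(frac=1, random_state=42).reset_index(drop=True)
train, heldout = df.iloc[:190], df.iloc[190:]

train.to_csv("LT67_training.csv", index=False)
heldout.to_csv("LT67_heldout.csv", index=False)

# Upload both files to S3 (replace 'my-aml-bucket' with your own bucket name).
s3 = boto3.client("s3")
for filename in ["LT67_training.csv", "LT67_heldout.csv"]:
    s3.upload_file(filename, "my-aml-bucket", filename)

Shuffling before the split is what keeps the age, sex, height, and weight distributions similar in both subsets, which is the property the tutorial asks for.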
Both files are small, only a few KB, and hosting costs should remain negligible for this exercise. Note that for each file, by selecting the Properties tab on the right, you can specify how your files are accessed: which user, role, group, or AWS service may download, read, write, and delete the files, and whether or not they should be accessible from the open web. When creating the datasource in Amazon ML, you will be prompted to grant Amazon ML access to your input data. You can specify the access rules to these files now in S3 or simply grant access later on.

Our data is now in the cloud in an S3 bucket. We need to tell Amazon ML where to find that input data by creating a datasource. We will first create the datasource for the training file LT67_training.csv.

Declaring a datasource

Go to the Amazon ML dashboard, and click on Create new… | Datasource and ML model. We will use the faster flow available by default. You are asked to specify the path to the LT67_training.csv file ({S3://bucket}{path}{file}). Note that the S3 location field automatically populates with the bucket names and file names that are available to your user.

Specifying a Datasource name is useful for organizing your Amazon ML assets. By clicking on Verify, Amazon ML will make sure that it has the proper rights to access the file. In case it needs to be granted access to the file, you will be prompted to do so; just click on Yes to grant access. At this point, Amazon ML will validate the datasource and analyze its contents.

Creating the datasource

An Amazon ML datasource is composed of the following:

The location of the data file: the data file is not duplicated or cloned in Amazon ML but accessed from S3.
The schema, which contains information on the type of the variables contained in the CSV file: categorical, text, numeric (real-valued), or binary.

It is possible to supply Amazon ML with your own schema or modify the one created by Amazon ML. At this point, Amazon ML has a pretty good idea of the type of data in your training dataset. It has identified the different types of variables and knows how many rows it has. Move on to the next step by clicking on Continue, and see what schema Amazon ML has inferred from the dataset; a sketch of such a schema follows.
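To make the inferred schema concrete, here is what a hand-written schema for this datasource could look like, following the Amazon ML schema JSON format. Treat this as an illustrative sketch: the attribute names match our CSV, but in practice you would let Amazon ML generate this file and then adjust it if needed.

{
  "version": "1.0",
  "targetAttributeName": "weight",
  "dataFormat": "CSV",
  "dataFileContainsHeader": true,
  "attributes": [
    { "attributeName": "sex",    "attributeType": "CATEGORICAL" },
    { "attributeName": "age",    "attributeType": "NUMERIC" },
    { "attributeName": "height", "attributeType": "NUMERIC" },
    { "attributeName": "weight", "attributeType": "NUMERIC" }
  ]
}

The attributeType values correspond to the four variable types listed above: BINARY, CATEGORICAL, NUMERIC, and TEXT.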
Amazon ML needs to know at this point which variable you are trying to predict. Be sure to tell Amazon ML the following:

The first line in the CSV file contains the column names.
The target is the weight.

We see here that Amazon ML has correctly inferred that sex is categorical and that age, height, and weight are numeric (continuous real values). Since we chose a numeric variable as the target, Amazon ML will use linear regression as the predictive model. For binary or categorical targets, it would have used logistic regression. This means that Amazon ML will try to find the best a, b, and c coefficients so that the weight predicted by the following equation is as close as possible to the observed real weight present in the data:

predicted weight = a * age + b * height + c * sex

Amazon ML will then ask you if your data contains a row identifier. In our present case, it does not. Row identifiers are useful when you want to understand the prediction obtained for each row or add an extra column to your dataset later on in your project. Row identifiers are for reference purposes only and are not used by the service to build the model. You will then be asked to review the datasource. You can go back to each one of the previous steps and edit the parameters for the schema, the target, and the input data. Now that the data is known to Amazon ML, the next step is to set up the parameters of the algorithm that will train the model.

Understanding the model

We select the default parameters for the training and evaluation settings. Amazon ML will do the following:

Create a recipe for data transformation based on the statistical properties it has inferred from the dataset.
Split the dataset (LT67_training.csv) into a training part and a validation part, with a 70/30 split. The split strategy assumes the data has already been shuffled and can be split sequentially.

The recipe will be used to transform the data in a similar way for the training and the validation datasets. The only transformation suggested by Amazon ML is to transform the categorical variable sex into a binary variable, where m = 0 and f = 1, for instance. No other transformation is needed.

The default advanced settings show that Amazon ML will pass over the data 10 times, shuffle-splitting the data each time. It will use an L2 regularization strategy, based on the sum of the squares of the coefficients of the regression, to prevent overfitting. We will evaluate the predictive power of the model using our LT67_heldout.csv dataset later on. Regularization comes in three levels, with a mild (10^-6), medium (10^-4), or aggressive (10^-2) setting, each value stronger than the previous one. The default setting is mild, the lowest, with a regularization constant of 0.000001 (10^-6), implying that Amazon ML does not anticipate much overfitting on this dataset. This makes sense when the number of predictors, three in our case, is much smaller than the number of samples (190 for the training set).

Clicking on the Create ML model button will launch the model creation. This takes a few minutes to resolve, depending on the size and complexity of your dataset. You can check its status by refreshing the model page. In the meantime, the model status remains pending. The datasource and model can also be created programmatically, as sketched below.
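For reference, here is a hedged boto3 sketch of the same datasource and model creation. The IDs and bucket name are placeholders of our own choosing, and we assume the schema JSON shown earlier has been saved locally as LT67.schema; the sgd.* parameters mirror the console defaults described above (note that the Amazon Machine Learning API has since been deprecated for new accounts).

import boto3

ml = boto3.client("machinelearning")

# Attach the schema to the training CSV already uploaded to S3.
with open("LT67.schema") as f:
    schema = f.read()

ml.create_data_source_from_s3(
    DataSourceId="ds-LT67-training",
    DataSourceName="LT67 training data",
    DataSpec={
        "DataLocationS3": "s3://my-aml-bucket/LT67_training.csv",
        "DataSchema": schema,
    },
    ComputeStatistics=True,  # statistics are required for training datasources
)

# Train a linear regression model with 10 passes, shuffling,
# and mild (1e-6) L2 regularization, matching the console defaults.
ml.create_ml_model(
    MLModelId="ml-LT67-weight",
    MLModelName="LT67 weight regression",
    MLModelType="REGRESSION",
    Parameters={
        "sgd.maxPasses": "10",
        "sgd.shuffleType": "auto",
        "sgd.l2RegularizationAmount": "1e-6",
    },
    TrainingDataSourceId="ds-LT67-training",
)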
At this point, Amazon ML will split our training dataset into two subsets: a training and a validation set. It will use the training portion of the data to train several settings of the algorithm and select the best one based on its performance on the training data. It will then apply the associated model to the validation set and return an evaluation score for that model. By default, Amazon ML will sequentially take the first 70% of the samples for training and the remaining 30% for validation. It is worth noting that Amazon ML will not create two extra files and store them on S3, but will instead create two new datasources out of the initial datasource we have previously defined. Each new datasource is obtained from the original one via a data rearrangement JSON recipe such as the following:

{
  "splitting": {
    "percentBegin": 0,
    "percentEnd": 70
  }
}

You can see these two new datasources in the Datasource dashboard. Three datasources are now available where there was initially only one.

While the model is being trained, Amazon ML runs the stochastic gradient descent algorithm several times on the training data with different parameters:

Varying the learning rate in increments of powers of 10: 0.01, 0.1, 1, 10, and 100.
Making several passes over the training data while shuffling the samples before each pass.
At each pass, calculating the prediction error, the Root Mean Squared Error (RMSE), to estimate how much of an improvement over the last pass was obtained. If the decrease in RMSE is not significant, the algorithm is considered to have converged, and no further pass is made.

At the end of the passes, the setting that ends up with the lowest RMSE wins, and the associated model (the weights of the regression) is selected as the best version. Once the model has finished training, Amazon ML evaluates its performance on the validation datasource. Once the evaluation itself is also ready, you have access to the model’s evaluation.

Evaluating the model

Amazon ML uses the standard RMSE metric for linear regression. RMSE is defined as the square root of the mean of the squared differences between the predicted values and the real values:

RMSE = sqrt( (1/n) * sum over i of (ŷ_i - y_i)^2 )

Here, ŷ denotes the predicted values, and y the real values we want to predict (the weight of the children in our case). The closer the predictions are to the real values, the lower the RMSE is. A lower RMSE means a better, more accurate prediction.

Making batch predictions

We now have a model that has been properly trained and selected among other models. We can use it to make predictions on new data. A batch prediction consists in applying a model to a datasource in order to make predictions on that datasource. We need to tell Amazon ML which model we want to apply on which data. Batch predictions are different from streaming predictions: with batch predictions, all the data is already made available as a datasource, while for streaming predictions, the data is fed to the model as it becomes available, and the dataset is not available beforehand in its entirety.

In the main menu, select Batch Predictions to access the predictions dashboard and click on Create a New Prediction. The first step is to select one of the models available in your model dashboard; you should choose the one that has the lowest RMSE. The next step is to associate a datasource with the model you just selected. We had uploaded the held-out dataset to S3 at the beginning of this chapter (under the Loading the data on S3 section) but had not used it to create a datasource. We will do so now. When asked for a datasource in the next screen, make sure to check My data is in S3, and I need to create a datasource, and then select the held-out dataset that should already be present in your S3 bucket. Don’t forget to tell Amazon ML that the first line of the file contains column names.

In our current project, the held-out dataset also contains the true values for the weight of the students. This would not be the case for real data in a real-world project, where the true values are genuinely unknown. In our case, however, this will allow us to calculate the RMSE score of our predictions and assess their quality. The final step is to click on the Verify button and wait for a few minutes: Amazon ML will run the model on the new datasource and generate predictions in the form of a CSV file. These console steps can also be scripted, as sketched below.
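Here is the equivalent boto3 sketch for the batch prediction step, under the same assumptions as before: the IDs, bucket name, and output path are placeholders, and LT67.schema is the schema file we saved locally.

import boto3

ml = boto3.client("machinelearning")

with open("LT67.schema") as f:
    schema = f.read()

# Create a datasource for the held-out file.
ml.create_data_source_from_s3(
    DataSourceId="ds-LT67-heldout",
    DataSourceName="LT67 held-out data",
    DataSpec={
        "DataLocationS3": "s3://my-aml-bucket/LT67_heldout.csv",
        "DataSchema": schema,
    },
    ComputeStatistics=False,  # statistics are only needed for training datasources
)

# Apply the trained model to the held-out datasource; the results
# are written as a gzipped CSV under the given S3 output URI.
ml.create_batch_prediction(
    BatchPredictionId="bp-LT67-heldout",
    BatchPredictionName="LT67 held-out predictions",
    MLModelId="ml-LT67-weight",
    BatchPredictionDataSourceId="ds-LT67-heldout",
    OutputUri="s3://my-aml-bucket/batch-predictions/",
)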
Contrary to the evaluation and model-building phase, we now have real predictions, and we are no longer given a score associated with them. After a few minutes, you will notice a new batch-prediction folder in your S3 bucket. This folder contains a manifest file and a results folder. The manifest file is a JSON file with the path to the initial datasource and the path to the results file. The results folder contains a gzipped CSV file. Uncompressed, the CSV file contains two columns: trueLabel, the initial target from the held-out set, and score, which corresponds to the predicted values.

We can easily calculate the RMSE for these results directly in a spreadsheet through the following steps:

Creating a new column that holds the square of the difference between the two columns.
Averaging that column over all the rows.
Taking the square root of the result.

In the spreadsheet, we create a third column, C, as the squared difference between the trueLabel column A and the score (or predicted value) column B. Averaging column C and taking the square root gives an RMSE of 11.96, which is significantly better than the RMSE we obtained during the evaluation phase (14.4). The fact that the RMSE on the held-out set is better than the RMSE on the validation set means that our model did not overfit the training data, since it performed even better on new data than expected. Our model is robust.

Plotting the true (triangle) and predicted (circle) weight values for all the samples in the held-out set alongside the histogram of the residuals shows that, similar to the histogram of residuals we observed on the validation set, the residuals are not centered on 0: our model has a tendency to overestimate the weight of the students.

In this tutorial, we successfully loaded the data on S3 and let Amazon ML infer the schema and transform the data. We also created a model, evaluated its performance, and finally made a prediction on the held-out dataset. To understand how to leverage Amazon’s powerful platform for your predictive analytics needs, check out the book Effective Amazon Machine Learning.
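If you prefer code to a spreadsheet, here is a minimal pandas sketch of the same RMSE check. The file name is a placeholder, since the actual results file sits under a generated name in the batch-prediction results folder (pandas can also read the .gz file directly).

import math
import pandas as pd

# Load the batch-prediction results.
# Columns: trueLabel (actual weight) and score (predicted weight).
results = pd.read_csv("batch_prediction_results.csv")

# RMSE: square root of the mean squared difference between the two columns.
squared_errors = (results["score"] - results["trueLabel"]) ** 2
rmse = math.sqrt(squared_errors.mean())
print(f"RMSE on the held-out set: {rmse:.2f}")

This computes exactly the quantity described in the spreadsheet steps above.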


Google+ affected by another bug: 52M users compromised, to shut down within 90 days


It has been only two months since Google reported the discovery of a bug in one of the Google+ People APIs, which affected up to 500,000 Google+ accounts and initiated the shutdown of Google+. Yesterday, Google disclosed another massive data leak, affecting approximately 52.5 million users in connection with a Google+ API. This has led Google to expedite the process of shutting down Google+: access to the Google+ API network will be cut off in the next 90 days, and the platform will shut down completely in April, rather than August next year.

In a blog post, David Thacker, VP of Product Management for G Suite, stated that this bug was introduced as part of a software update in November and was immediately fixed. However, people are upset that the data leak is only being disclosed now.

The software bug meant that apps that requested permission to view the profile information of a Google+ user (name, email address, occupation, age, and so on) were granted that permission even for fields set to not-public. In addition, Thacker mentions, “apps with access to a user’s Google+ profile data also had access to the profile data that had been shared with the consenting user by another Google+ user but that was not shared publicly.” However, financial data, national identification numbers, passwords, and similar data typically used for fraud or identity theft were not exposed.

Google discovered the bug as part of its standard testing procedure and says there is “no evidence that the app developers that inadvertently had this access for six days were aware of it or misused” it. Google says it has begun notifying users and enterprise customers who were impacted by the bug.

Thacker also says maintaining users’ privacy is Google’s top concern: “We have always taken this seriously, and we continue to invest in our privacy programs to refine internal privacy review processes, create powerful data controls, and engage with users, researchers, and policymakers to get their feedback and improve our programs.”

People on Hacker News were highly critical of this data leak and expressed concerns about the kind of organization Google is turning out to be:

“I’ve been online since Google was a new up and coming company. There is a world of difference between the civic-mindedness of Google back then and Google now. Google has gone from something genuinely idealistic to something scary and totalitarian. If you aren’t of the same ‘tribe’ as the typical Googler, then basically, you’re a subject.”

“So, how does Google, which we all trust with our precious data, end up messing up like this several times in a row? If this is the company with the best security team in the world, does that mean we should simply abandon all hope?”

“They could have done soo much more with Google+ … The hype was real up until launch. Really wish they had done things a little differently. Oh well… With all these leaks, I’m actually really glad they weren’t successful with this.”