Like Photography? Love Animals? You’ll Love the Winners of the First Annual “Comedy Wildlife Photography Awards”
Talking to Yourself (Out Loud) Can Help You Learn
But he had an advantage. Ross is a learning researcher, and he’s familiar with the effective, but often underestimated, learning strategy known as self-explaining. The approach revolves around asking oneself explanatory questions like, “What does this mean? Why does it matter?” It really helps to ask them out loud. One study shows that people who explain ideas to themselves learn almost three times more than those who don’t.
To help him outperform his younger colleagues, Ross asked himself lots of questions. He would constantly query himself as he read through the assigned texts. After each paragraph, after each sentence, he would ask himself: “What did I just read? How does that fit together? Have I come across this idea before?”
By the end of the course, Ross had found that, despite his relative inexperience and unfamiliarity with computers, he could answer many questions that the other students couldn’t and understood programming in ways that they didn’t. “I sometimes had the advantage,” he told me. “I was focused on the bigger picture.”
In the modern economy, there are few skills more important than the ability to learn. Around the globe, learning is highly predictive of future earnings. Companies may pay for training or reimburse educational courses, but the skill of gaining skills is rarely taught.
Here’s how to employ self-explaining in your own learning:
Talk to yourself. Self-talk has a bad reputation; muttering to ourselves often seems to be a sign of mental distress. It’s not cool to do in public. But talking to ourselves is crucial to self-explaining and generally helpful for learning. For one thing, it slows us down — and when we’re more deliberate, we typically gain more from an experience.
Ask why. Self-explaining can give voice to impulses of curiosity that may otherwise remain unexplored. It’s about asking ourselves the question, “Why?” Now, if we really know a topic, “why” questions are not that hard. If I asked you a why question about the town that you grew up in, the answer would come pretty easily. It’s when we don’t know something that why questions become more difficult — and create a way to develop an area of expertise.
To illustrate the practice, let’s examine a query like, “Why are there waves?” Some of us can bumble our way to a basic answer. Maybe something like: “Well, waves have to do with the wind. When wind blows across the top of the water, it creates ripples of water.”
But then comes the inevitable follow-up: “Why does the wind lift the water?” or “Why are there waves when there’s no wind?” Here we draw a blank. Or at least I do, and so I start searching for some sort of answer, spinning through the internet, reading up on how energy moves through water. In the end, I’ve learned much more.
Summarize. Summarizing is a simple way to engage in self-explaining, since the act of putting an idea into our own words can promote learning.
You probably have had this experience in your own life. Recall, for instance, a time when you read an article in a magazine and then detailed its argument for a friend. That’s a form of summarizing — you’re more likely to have learned and retained information from that article after you did it.
For another illustration, imagine that you recently wrote an email describing your thoughts on a documentary that you saw on Netflix. In doing so, you fleshed out the idea and engaged in a more direct form of sensemaking. So, all in all, you’ll have a richer sense of the movie and its themes.
You can do this in your own life. The next time a person — your boss, your spouse, a friend — gives you a set of detailed instructions, take the time to verbally repeat the directives. By reciting everything back, you’ll have taken steps to summarize that knowledge, and you’ll be far more likely to remember the information.
Make connections. One of the benefits of self-explaining is that it helps people see new links and associations. Seeing connections helps improve memory. When we’re explaining an idea to ourselves, we should try to look for relationships. That’s one of the reasons that a tool like mnemonics works. We’re better able to remember the colors of the rainbow because we’ve created a link between the first letter of the names of the colors and the acronym ROYGBIV.
When we spot links in an area of expertise, we can gain a richer understanding. This helps explain why Brian Ross had such success using self-explaining. As he learned about computer programming, he tried to explain ideas to himself, relying on different words or concepts. “A lot of what you’re doing in self-explanation is trying to make connections,” Ross told me. “Saying to yourself, ‘Oh, I see, this works because this leads to that, and that leads to that.’”
Self-explaining should go into the learning tool kit of workers today, as the economy places new demands on making connections and adopting new insights and skills. AT&T CEO Randall Stephenson says technology workers need to learn online for at least five hours per week to fend off obsolescence. They might want to find a solitary place to do so, where they don’t feel abashed about talking out loud to themselves.
Self-talk also helps us think about our thinking. When we’re engaged in a conversation with ourselves, we typically ask ourselves questions along the lines of: “How will I know what I know? What do I find confusing? Do I really know this?” Whether we hit the pause button while listening to a podcast or stop to reflect while reading a manual, we develop skills more effectively by thinking about our thinking.
This article originally appeared at: https://hbr.org/2017/05/talking-to-yourself-out-loud-can-help-you-learn.
Italy Is Giving Away Old Castles For Free, And Here’s How You Can Get One
Amazon made just $5 billion in net profit over the past 20 years – in half that time, Facebook made 5X that!
MIT researchers develop a drone system that can do a camera operator’s job
Shooting professional-quality video with a drone is not an easy task, and it often requires multiple human operators. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) think they’ve found a way to take humans out of the operation part of the equation altogether. This week, the team teased a system, which they plan to unveil at a conference later this month, that lets filmmakers set certain parameters and then let the drone do all the work.
The group calls the system “real-time motion planning for aerial videography,” and it lets a director define basic parameters of a shot, like how tight or how wide the frame should be, or the position of the subject within that frame. They can also change those settings on the fly and the drone will adjust how it’s filming accordingly. And, of course, the drone can dynamically avoid obstacles.
The researchers say that a director using their system would be able to weigh certain variables differently so the drone knows what to prioritize in a shot, too. From the MIT release:
Unless the actors are extremely well-choreographed, the distances between them, the orientations of their bodies, and their distance from obstacles will vary, making it impossible to meet all constraints simultaneously. But the user can specify how the different factors should be weighed against each other. Preserving the actors’ relative locations onscreen, for instance, might be more important than maintaining a precise distance, or vice versa. The user can also assign a weight to minimize occlusion, ensuring that one actor doesn’t end up blocking another from the camera.
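The weighting scheme the release describes can be sketched as a cost function that a planner minimizes over candidate camera positions. This is an illustrative sketch only, not CSAIL’s actual code; the function name, dictionary keys, and weight values are all hypothetical:

```python
def shot_cost(candidate, target, weights):
    """Score a candidate camera position against the director's shot
    specification; lower is better.

    candidate/target are dicts with 'screen_positions' (per-actor (x, y)
    in normalized frame coordinates), 'distance' (camera-to-subject),
    and, for the candidate, 'occlusion' (fraction of one actor blocked
    by another, 0..1).
    """
    cost = 0.0

    # Penalty for actors drifting from their desired on-screen positions.
    pos_err = sum(
        (cx - tx) ** 2 + (cy - ty) ** 2
        for (cx, cy), (tx, ty) in zip(
            candidate["screen_positions"], target["screen_positions"]
        )
    )
    cost += weights["screen_position"] * pos_err

    # Penalty for deviating from the desired camera-to-subject distance.
    cost += weights["distance"] * (candidate["distance"] - target["distance"]) ** 2

    # Penalty for one actor blocking another from the camera.
    cost += weights["occlusion"] * candidate["occlusion"]
    return cost
```

A planner would evaluate a cost like this over many candidate camera positions each frame and steer toward the cheapest one. Raising `weights["occlusion"]` relative to `weights["distance"]` expresses exactly the trade-off the release describes: keeping both actors visible matters more than holding a precise distance, or vice versa.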
It’s a cool idea that’s both reminiscent of and seemingly a natural extension of the virtual camera work that directors like James Cameron helped pioneer and that others (like Gareth Edwards and Lucasfilm) have been using ever since. It’s definitely not ready for that kind of work, judging from CSAIL’s video. But it’s another important wrinkle in the way new hardware and software are changing filmmaking, big or small.
This article originally appeared at: https://www.theverge.com/2017/5/19/15664208/mit-drone-filmmaking-research-csail.
How The New York Times’ recast R&D unit got back to basics – Digiday
The New York Times’ R&D Lab, started in 2006, grew to a 12-person team that dreamed up products of the future. There was a Minority Report-style “magic mirror” that could serve up information as you’re brushing your teeth. And a conference table that takes notes.
Now, the research unit, recast as Story[X] a year ago, is slimmed down to five people and more focused on the here and now than sci-fi fantasies. Leading the team is Marc Lavallee, who was head of interactive news technology when he was tapped to run the group last September. He reports to Kinsey Wilson, evp of product and technology at the Times. Lavallee and his team of creative technologists — he described them as “misfits” — work as an advance team of sorts for product, editorial and advertising, providing research help.
Lavallee said that when he arrived at the Times in 2011, the R&D Lab’s mandate was to think three to five years out, but not to worry about connecting its work back to the Times’ here and now. The magic mirror is out; the connected home is very much in. The mandate now is less to dream about the future and more to build for the present.
“They focused a good bit on building out those artifacts for the future,” he said. “There’s a value to doing that R&D, but what didn’t really happen in practice with any sort of regularity was connecting it back to what the rest of the company was doing. I wanted to flip the model and say, instead of developing this first, then see if there’s an application, start with the need of the newsroom and advertisers and move out from there, instead of the crash landing with something no one was asking for. We don’t want to be a white-paper team making things no one reads.”
The Story[X] team works on the second floor of the Times building near the graphics, interactive news technology and digital news design desks. Its room is filled with whiteboards, speakers, an Oculus rig and other gadgets. Each quarter, it focuses on a certain area: last quarter it was computer vision, now it’s augmented reality and next is the connected home. It’s not dissimilar to the work the Financial Times does at FT Labs or Quartz does with its Bot Studio.
The team has used its computer vision research to speed up the production of this interactive inauguration photo that the newsroom produced. It’s been looking at how it can apply computer vision to news or advertising (by, say, helping the newsroom find the best news photo based on its memorability). It’s also working on demos aimed at showing the newsroom how it might use augmented reality beyond its current applications.
In a sense, the R&D Lab was a product of its time. In 2006, it had more to prove in terms of its tech capabilities. The print and digital sides were much more siloed than they are now, editorial and advertising hardly spoke to each other, and the Times, always highly scrutinized, hadn’t yet proved it could make money from digital subscriptions.
“Ultimately, the legacy of the R&D Lab is the fact that the Times is in such good shape today and is on the forefront of innovation for publishers,” said Sam Mandel, a partner at startup firm Betaworks, in which the Times is an investor. “A lot of things that came out of the lab may not have changed the way they deliver the news, but it’s helped the people who run the Times stay on top of what’s happening in technology.”
“It doesn’t need to be this thing bolted to the side anymore,” Lavallee said.
Story[X] serves three masters — editorial, product and advertising — with priority on editorial. On paper, Lavallee admits, it looks like a “challenging mandate” to work across three divisions that have competing interests, but he said he hasn’t run into any conflicts yet. By reporting to Wilson, the unit is set up to ensure it won’t become an adjunct of the advertising department. Coming from a background in news and tech, Lavallee can move fluidly between at least two Times departments. And the internal divisions at the Times have become more porous since 2006, anyway.
“Those parts of the organization are finding ways to all be on the same page about everything,” Lavallee said. “This is one small piece of that. If I do my job right, I make all their jobs easier.”
Amazon Alexa is poised to control the connected ecosystem of the future
Alexa, is Amazon poised to control the connected ecosystem of the future?
Just two weeks after introducing the hands-free camera and style assistant Echo Look, Amazon made another Echo product, Show, available for pre-order – and this time it has a screen. And while an introductory video would have you believe it’s a device that enables us to live better lives, there are many signs Amazon is positioning itself to control the connected ecosystem of the future – and it may very well win.
Or, as Amazon puts it, “Now Alexa can show you things.” And consumer voice queries will receive responses enhanced with visuals.
That’s because the Echo Show, which will be released June 28, has a 7-inch touchscreen from which users can make standard queries, initiate hands-free video calls to users who have an Echo or the Alexa app, monitor their front doors or nurseries, and see music lyrics as songs play. In fact, in the video that showcases Echo Show’s features, a man asks Alexa to show him YouTube videos of sponge painting and then selects the video he wants to watch by asking Alexa to “play number three”.
Long live ten blue links?
This marks a vastly different way to search for information with Alexa, which was previously limited to verbal responses. It also means search as we know it may not change as drastically as initially suspected as digital assistants become further ingrained in our lives. In other words, marketers don’t necessarily need to panic about how they will become the single best answer Alexa selects for a given query.
For his part, Michael Dobbs, vice president of SEO at digital marketing agency 360i, said it’s exciting to see a voice input paired with a visual output, and that Echo Show could pave the way for more engaging experiences overall. The advent of Echo Show will also accelerate the number of voice queries made, he said, because voice search is faster and the screen enhances the interactions consumers have with assistants.
“I think the visual component will add another new layer of relevance and remove friction to get to relevant content faster,” Dobbs said.
An interactive lifestyle device – with a network beyond Amazon
And while the Echo, Dot and Google Home have previously interacted with screens via other devices, Echo Show combines this into one device so users can understand the benefit of a voice and graphical user interface in a single package, Dobbs said.
For his part, Chris Colborn, global chief innovation officer at digital agency R/GA, used the phrase “a picture is worth a thousand words” multiple times in talking about the Echo Show, noting a verbal interface can be a challenging way to convey information. In other words, a screen allows the device to convey more information more quickly and for consumers to make decisions more naturally, so an Echo with a screen marks something of a natural progression for Amazon and the Echo family.
“It’s functioning as a mobile phone, but in a hands-free home-based environment,” Colborn added.
Echo Show indeed allows users to make hands-free video calls – and that, too, is an intriguing development.
“When you’re busy making dinner, just ask Alexa to place a call from your Echo Show to anyone with a supported Echo device or the Alexa App,” Amazon says on its pre-order page. “You can also enable a new feature called Drop In for the special cases when you want to connect with your closest friends and family. For example, you can drop in to let the family know it’s time for dinner, see the baby’s nursery or check in with a close relative.”
Per David Hewitt, global mobility lead at global agency SapientRazorfish, Echo Show is poised to do what smart TV manufacturers have failed to do for a decade – and that is to effectively transform a staple media appliance into an interactive lifestyle service.
“Instead of scaring folks out the gate on what Amazon partners and brands are going to do with [a] big brother live camera feed in consumer homes, Amazon has taken a smarter approach to initially focus on intra-family communication and a more one-way approach to [third]-party video content,” Hewitt said. “That is not to say Amazon will completely close off video access for third-party skill developers – this launch strategy might presumably be to build trust and dependency with basic family-friendly features to then scale to other developers at a more measured pace.”
And this, in turn, could indicate Amazon is now effectively the leader in the drive to create assistant-driven ecosystems that know consumers and make decisions on their behalf, so tasks are performed as if by magic.
“Even at CES [a few years ago], Amazon wasn’t even there, but everyone talked about Alexa,” said Dana DiTomaso, president of digital marketing agency Kick Point. “I think their ecosystem is winning in a lot of ways.”
Colborn agreed recent activity makes it look like Amazon is trying to build its own connected ecosystem – particularly by enabling personal peer-to-peer connections. And because the mobile market is already saturated – which is even more interesting when you consider Amazon’s Fire phone was a bit of a dud – this is the new battleground, he added.
“We’re at the point now where mobile is ubiquitous and there is little room for growth and all these electronics companies have to figure out how to go into IoT,” Colborn said.
And the addition of screens and so-called Drop Ins clearly demonstrates the notion of a network beyond Amazon – one that extends to friends and family as well.
“We have to see how that plays out…it’s not an unexpected evolution for Amazon, but a new opportunity – they struggled with skills in terms of building good ones,” Colborn said. “But a lot more is known about touchscreens [and] cameras…and there’s a chance to build a stronger interactive paradigm if they can get out quickly and innovate well. But how fast it is adopted and by who is anybody’s guess.”
And citing the $230 price point, Hewitt said he doesn’t think Echo Show will have the sales volume of its cheaper siblings.
What’s more, DiTomaso pointed to the perhaps uninspired appearance of Echo Show and questioned whether it was rushed because Amazon heard a player like Google or Apple was working on something similar – particularly since Amazon only just announced a similar device in Look.
“It feels weird they are announcing it so close – and Look is still invitation only. It feels rushed,” she added.
Peeping Tom
Either way, the advent of a digital assistant with eyes as well as ears poses questions about what an always-listening device with access to images of our faces and/or bodies is doing with said information.
“It’s the same security implications of any other of these forever-listening devices and the slow march to additional privacy concerns,” DiTomaso said. “If I said to you ten years ago, there will be a device in your home always listening to you in case you said, ‘Play this song,’ [you would have thought it was crazy], but now not only do we have this, it has a camera [and a] screen as well.”
With Look in particular, Colborn said there’s theoretically an eye in your room and consumers have to worry about when it’s on.
“Amazon has a big interest in gleaning more information from consumers,” Colborn said. “Not just what they’re searching for – what they’re wearing, and who they are interacting with. These are new opportunities to gather information about customers to help with purchase decisions.”
And machine learning can do a lot more with a photo of a user’s face than with the sounds in his or her home – and even more than that with photos of his or her entire body.
“[Amazon says] it’s only for fashion advice, but they could change the terms of service,” DiTomaso said. “Sometimes companies get hacked, or they’re using devices for slightly different information [than they explicitly talk about], like Unroll.me, [which is a free service that helped users unsubscribe from email newsletters], but it turned out they were reading your email and selling out patterns to advertisers…because these companies have complicated terms of service and can use data in different ways, and I don’t think the average consumer understands [how their data is being utilized].”
And while DiTomaso conceded there is value to Look for users who are, say, color blind, she noted it may also start to deliver messages like, “People who look like you bought weight loss equipment,” which raises questions about privacy.
For his part, Hewitt said it will be interesting to see how Amazon manages “such an intimate and precious asset” and Dobbs recommends healthy skepticism when introducing new devices into the home.
“I think there are a ton of exciting things a consumer might want to sacrifice in terms of privacy [and] security. If it helps me have a more seamless experience with getting information or doing research, those are things I’m going to weigh. We can only hope companies are doing their due diligence in terms of keeping information private,” Dobbs said. “From a search perspective, we saw something similar with encrypted search. Search engines used to allow third parties to intercept keywords, but that’s no longer the case. While Alexa [and] Google are recording interactions we have, they are not recording everything all the time on every device…we do need to be worried about who they’re sharing that data with and there should be some privacy questions, but, as a consumer, it’s about balancing value and rewards.”
DiTomaso agreed, advising consumers look at companies with a critical eye.
“People just trust Amazon -it’s one of the top 10 trusted brands,” she added. “But so is Google [and] they’ve done crazy stuff.”
Hot Wheels STUNT RACE- Slow Mo (2500 FPS)
This is the Hot Wheels race of my childhood dreams. It’s a little different of a vid for me but I wanted to learn how to film like the pros.
Summary: I made a Hot Wheels race video between Knight Rider, Scooby-Doo, the A-Team van, the Ghostbusters wagon and the DeLorean from Back to the Future. It also features our 2 dogs and a bunch of my son’s toys.
This article originally appeared at: https://www.youtube.com/watch?v=vNds3PIBqnQ.
Why Twitter Co-Founder, Biz Stone, is Returning
I worked at Twitter for about six years. In that time, the service grew from zero people to hundreds of millions of people. Jack was the original CEO, and when he returned I was very happy.
There’s something about the personality of a company that comes from the folks who start it. There’s a special feeling they bring with them. Jack coming back was a big step forward. And now, it’s my turn—I’m returning to full-time work at Twitter starting in a couple of weeks! How this came about is kind of a crazy story, but it’s happening.
How It Happened
A few months ago, I sold the company I most recently founded. The deal did not require me to work at the company we sold to, but I’m the type of person who has to keep working. I’ve made a lot of connections over the years and one of those connections offered me a really sweet gig. I accepted! I had everything all worked out — and then it happened.
Twitter decided to relaunch the Friday afternoon tradition of Tea Time for employees in SF. Jack invited me to join him as “special guest” at this restart of an old tradition. When I stood next to Jack addressing the crowd of employees, I felt the energy, and I was overcome with emotion. I realized in that moment that Twitter was the most important work of my life.
While we were on stage, Jack asked me to come back to work at Twitter. People cheered. But I wasn’t really sure if he meant it. After Tea Time, we spoke privately and Jack told me that he really did — he wanted me to come back and work at Twitter. The company I co-founded, the service I co-invented. I was stunned, but I knew the answer.
What I’ll Be Doing
My top focus will be to guide the company culture, that energy, that feeling. This is where Jack, and Twitter’s inestimable CMO, Leslie Berland, feel I can have the most powerful impact. It’s important that everyone understands the whole story of Twitter and each of our roles in that story. I’ll shape the experience internally so it’s also felt outside the company. More soon.
I’m not replacing anyone at Twitter. Somebody mentioned I’m just filling the “Biz-shaped hole” I left. You might even say the job description includes being Biz Stone. Ev said it best when I told him about this turn of events: “Well, you’re among the best in the world at being Biz Stone.” (I’ve worked with him for fifteen years, so I recognize his compliments.)
My excitement at the chance to work on Twitter again with Jack, Leslie and the entire team around the world is over the moon. As I truly believe, and as I’ve written before, the Tweets must flow. Twitter has woven itself into the fabric of our global society. The world needs Twitter, and it’s here to stay. I’m so lucky that I get to step back in and help shape its future.
Biz Stone,
Co-founder, Twitter Inc.