It was fascinating to see how the audience hung on the speakers' lips


Post by Bappy10 »

Burton clearly attracted the Trekkies in the audience. He beautifully illustrated what Star Trek has done to make imaginable what new technologies can mean for humanity: from Star Trek's Holodeck and mobile phones to Google Glass, which of course bears similarities to La Forge's visor in Star Trek. The most impressive speaker, however, was the eloquent and convincing Mae Jemison. She emphasized the value of space innovations in our daily lives and argued that we must dare to dream in order to get anywhere. If we want to solve the grand challenges, we should not strive for marginal improvements but go for radical innovations; that is the only way to get somewhere.
I found myself resolving to monitor the program for the coming years. For us Europeans, it was a great lesson in thinking big. That's the only way to get to the moon.

Interact with the TV: Say it, wave it, take it 3D
This panel discussion about new forms of interaction with the TV screen looked interesting on paper. The enormous freedom of choice in what you want to see, when you want to see it and with whom you want to see it, creates important challenges. It is clear that classic television is no longer really sufficient. But what then?

Gesture TV

The panel consists of Gary Clayton (Nuance Communications), Ohad Shvueli (PrimeSense) and Tom Woods (Rovi Corporation), and is moderated by Tom Adams (IHS). From the start it is clear that the moderator is not connecting with the panelists. He asks questions that are not relevant to the discussion and that fall outside the panelists' areas of expertise; they clearly do not know what to do with them either. About halfway through, Gary visibly gets fed up and, supported by Ohad, takes over the lead, which causes a wave of relief in the room.

Gary and Ohad discuss the challenges and possible solutions of modern TV viewing. Personalization, distribution and interaction using gestures, voice control or combinations are discussed. Many topics are touched upon, but it all remains stuck at the level of signaling current problems and daydreaming about possible solutions. Tom has checked out and actually adds nothing more until the end of the discussion.

And so the really interesting issues were not addressed. How does the second screen relate to the big screen? To what extent can personalization and intelligence contribute to a clearer offering and a more intuitive interface? Does watching TV together as a social activity have a future, or is that a thing of the past? We often see the panel discussion formula fail at SXSW. Tom Adams could have gotten his revenge in this panel by asking these experts the question: 'what progress have we actually made in the past 10 years?' Fortunately, SXSW is not over yet and there are still plenty of opportunities.

A robot in your pocket
The two jolly gentlemen Jeff Bonforte of Xobni and Amit Kapur of Gravity tell us that there are more and more robots in our lives. Bonforte starts with a quote from Douglas Adams, who defined a robot as "your plastic pal who's fun to be with". The collective image of robots is determined by mechanical men. A better definition of a robot, according to Bonforte, is "a piece of technology that makes decisions on our behalf". Artificial intelligence, in other words.

Robots

Bonforte and Kapur cleverly and humorously tie together the trends of Big Data, Artificial Intelligence and mobile technology. A 'robot' can only make choices if it can establish relationships between pieces of data that are not obviously connected. For example, a location only becomes meaningful if you know more about the person who is there. Take someone in the Amsterdam Red Light District who runs a search on their mobile phone. If that person is an Amsterdammer who happens to live nearby, they are probably looking for something completely different than a British tourist would be. The ability to interpret the total context is what makes a piece of software truly intelligent.
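The idea that a query only gains meaning from its context can be sketched in a few lines. This is purely illustrative: the function, location string, and disambiguation rule are all invented for this example, not taken from any real product.

```python
# Hypothetical sketch: the same query means different things depending
# on who asks it and where. All names and rules here are illustrative.

def interpret_query(query: str, location: str, is_local: bool) -> str:
    """Rank likely intent using user context, not just the query text."""
    if location == "Amsterdam Red Light District" and query == "restaurant":
        # A nearby resident is probably after everyday options;
        # a tourist is more likely browsing the area's attractions.
        return "neighborhood lunch spots" if is_local else "tourist-area dining"
    return "generic web results"

print(interpret_query("restaurant", "Amsterdam Red Light District", is_local=True))
print(interpret_query("restaurant", "Amsterdam Red Light District", is_local=False))
```

The point is not the rule itself but that identical input text yields different intents once context is folded in.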

Bonforte shows us two ways to determine whether it will rain today. You can look at barometers and cloud patterns and conclude from those that it will probably rain. But you can also look at the footwear choices of a million people and judge from those whether it will rain: a stochastic approach, made possible by establishing relationships within "big data" — and an implicit relationship at that.
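The footwear version of the forecast boils down to counting an implicit signal across a crowd. A toy sketch, with entirely invented data and a made-up threshold:

```python
from collections import Counter

# Toy sketch of the footwear idea: infer rain from what a crowd is
# wearing (an implicit signal) rather than from barometers.
# The observation data below is invented for illustration.

observations = (["rain boots"] * 600_000
                + ["sneakers"] * 300_000
                + ["sandals"] * 100_000)

def rain_likelihood(footwear: list[str]) -> float:
    """Fraction of people wearing rain-appropriate footwear."""
    counts = Counter(footwear)
    return counts["rain boots"] / len(footwear)

p = rain_likelihood(observations)
print(f"Estimated chance of rain: {p:.0%}")  # Estimated chance of rain: 60%
```

No single shoe tells you anything; the relationship only emerges at scale, which is exactly the "big data" point being made.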

Implicit vs. Explicit Data
The distinction between explicit and implicit data is very interesting for developing intelligent services. The latter is often more precise and durable than the former. For example, who are your favorite people in your address book? If you look at which people you often email and with whom you often make appointments in your calendar, you can deduce who your favorites are; you don't have to mark them explicitly. An implicit connection is potentially more durable, because computers can constantly adjust it. Anyone who has tried to categorize a group of friends on Facebook for the umpteenth time knows how true this is.
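The address-book example above can be sketched directly: derive favorites from behavior instead of asking the user to star contacts. The data, the weighting of meetings over emails, and the helper name are all assumptions made for illustration.

```python
from collections import Counter

# Minimal sketch of implicit favorites: instead of explicit starring,
# count how often the user actually emails or meets each person.
# Sample data and the meeting weight are invented for illustration.

sent_emails = ["alice", "bob", "alice", "carol", "alice", "bob", "dave"]
calendar_meetings = ["bob", "alice", "bob"]

def implicit_favorites(emails: list[str], meetings: list[str], top_n: int = 2) -> list[str]:
    """Score contacts by interaction; meetings count double as a stronger tie."""
    scores = Counter(emails)
    for person in meetings:
        scores[person] += 2
    return [name for name, _ in scores.most_common(top_n)]

print(implicit_favorites(sent_emails, calendar_meetings))  # ['bob', 'alice']
```

Because the scores are recomputed from fresh behavior each time, the list adjusts itself as relationships change — the durability the text describes.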

Google Now is also used as an example of what is already possible. Google Now can make predictions based on behavioral patterns. If you drive from a certain address to another address every day, Google Now can conclude that this is probably your commute and can give you travel advice based on this pattern. We can therefore say goodbye to the image of robots as mechanical beings. A smartphone comes much closer.
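The commute example amounts to spotting a recurring pattern in a location log. A minimal sketch of that idea — the log format, route representation, and threshold are assumptions, not how Google Now actually works:

```python
from collections import Counter

# Sketch of the behavioral-pattern idea described above: flag a route
# as a likely commute when it recurs on enough weekdays.
# The trip log and the threshold are invented for illustration.

trips = [
    ("Mon", "08:10", "Home St 1", "Office Rd 9"),
    ("Tue", "08:05", "Home St 1", "Office Rd 9"),
    ("Wed", "08:15", "Home St 1", "Office Rd 9"),
    ("Thu", "08:00", "Home St 1", "Office Rd 9"),
    ("Sat", "11:30", "Home St 1", "Beach Ave 3"),
]

def likely_commute(trips, min_occurrences: int = 3):
    """Return the (origin, destination) pair seen on enough weekdays, else None."""
    weekdays = {"Mon", "Tue", "Wed", "Thu", "Fri"}
    routes = Counter((o, d) for day, _, o, d in trips if day in weekdays)
    if not routes:
        return None
    route, count = routes.most_common(1)[0]
    return route if count >= min_occurrences else None

print(likely_commute(trips))  # ('Home St 1', 'Office Rd 9')
```

Once the pattern is detected, travel advice can be attached to it — which is exactly the "prediction from behavior" the paragraph describes.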