General > Feature requests & roadmap

Virtual 3D AI


grind:
I also like that very much! And best of all would be if you could customize the face.

Btw. yours would be Sarah? Maybe because of Eureka? ;)

Nicolai

WhateverFits:
The face is customizable, and it costs $100 to $850 per license. It has an API, but the biggest integration piece comes with the expensive license.

Sigg3.net:
Just searched for an open source 3D talking head and found Xface. The project died in 2008, but they did reach version 1:
http://xface.fbk.eu/index.htm

There are probably heaps of other "3D butler" projects out there if you look for them. :P

The problem isn't tying it into LMCE, but how it would play alongside other media running on the same instance...

JaseP:
Blender and MakeHuman to make your 3D mesh, then just pre-render a video scene for every use case,... Ought to take no more than 50 to 60 GB of hard drive space to store all the animations... And maybe 9 months working 16-hour days to do all the animation.

Sorry for the sarcasm... But, I'm just not all that impressed by this kind of tech. Now, a neural network AI (minus the Final Fantasy-esque digital sex doll) to interface with LinuxMCE,... complete with visual and speech recognition, with predictive capabilities, and maybe a personality,... That would impress me. But we'll have to wait another 15-20 years for that kind of thing.

Armor Gnome:
I have actually considered doing a similar idea in my own setup.  My ideas, though, are much simpler and use looped or very long video files with previously recorded text-to-speech statements.  It would be ugly, but just nerdy enough for me to love. The other idea besides a human persona was a robotic face or a HAL-style glowing orb on-screen while the audio plays.

My roadblocks were the time and experience needed to develop it, the slight load time for xine after it receives a play command, and, most critically, that with the time I put into this I couldn't get a "blank" video file and an audio file to play at the same time.


Implementation idea if anyone is interested in putting time into this for themselves:

--- Code: ---Event: press a category button on an orbiter
Response: loop interface video on the local on-screen orbiter,
          play audio file "Here is a list of your videos. Which would you like me to retrieve for you?",
          stop looped video

Event: select a video file from the datagrid and press play
Response: loop interface video on the local on-screen orbiter,
          play audio file "Now loading [filename], please wait while I prepare it for you",
          stop looped video,
          play the selected video file
--- End code ---
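The event/response pairs above boil down to a small dispatch table. Here is a minimal Python sketch of that pattern; the event names, the avatar clip filename, and the step names are all hypothetical, and the actual loop/play/stop steps would have to be wired to whatever player the orbiter can drive:

```python
# Hypothetical sketch of the event -> response pattern described above.
# Filenames and step names are placeholders, not real LinuxMCE identifiers.

RESPONSES = {
    "browse_videos": {
        "speech": "Here is a list of your videos. Which would you like me to retrieve for you?",
        "then": "show_datagrid",
    },
    "play_video": {
        "speech": "Now loading {filename}, please wait while I prepare it for you.",
        "then": "start_playback",
    },
}

def handle_event(event, **kwargs):
    """Look up the canned response for an orbiter event and return the
    ordered steps: loop the avatar clip, speak, stop the loop, then act."""
    r = RESPONSES[event]
    return [
        ("loop_video", "interface_loop.avi"),         # hypothetical avatar clip
        ("play_audio", r["speech"].format(**kwargs)), # pre-recorded TTS line
        ("stop_loop",),
        (r["then"],),                                 # follow-up action
    ]
```

Each returned tuple would then be handed to the media player; the hard part, as noted above, is getting the looped video and the audio to play simultaneously.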

Note that this would not function with UI2, as the selections are not separate screens. I am sure that with enough effort it could be incorporated, but in the short term, a way to play video and audio at the same time would be a huge step toward this.
