26 December 2009

Auralization: Fundamentals of Acoustics, Modelling, Simulation, Algorithms and Acoustic VR

I received Michael Vorländer's book for X-mas. This Springer publication is a great college-level introduction to the science and technology of 3D sound rendering. Reading chapter 1 took me back to my classical dynamics class at the USAFA! Taylor series expansions (ignoring the higher-order terms), wave equations (PDEs) and matrix multiplication await the reader. I recently finished David Gelernter's 1991 book (232 pp), Mirror Worlds, in which he talks about LifeStreams that feed Mirror Worlds to be built in the not-so-distant future. Anyway, I am investigating the relative maturity of 3D acoustics, graphics and haptics from the standpoint of computer hardware (i.e., sound cards, graphics cards, etc.) and software engines (e.g., OpenAL, an API; H3D, an API; and X3D). Vorländer writes that real-time binaural rendering is about ten years away. He quantifies performance in milliseconds and scene density as the driving factors for plausible perception of binaural sounds synthesized from virtual reality interaction of objects driven by the environment or by avatars controlled by humans. Several ISO standards are referenced for material coefficients to use in sound synthesis algorithms, and the annex has several tables of these coefficients by center frequency.
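Those material-coefficient tables can be put to work directly. As a toy illustration of my own (the absorption coefficients below are made-up placeholders, not values from Vorländer's annex or any ISO table), here is Sabine's classic reverberation-time formula evaluated per octave-band center frequency:

```python
# Sabine's formula: RT60 = 0.161 * V / A, where V is room volume (m^3)
# and A is the total absorption (m^2 sabins), summed over all surfaces.
# Absorption coefficients here are illustrative placeholders only.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, {center_freq_hz: absorption_coeff})."""
    bands = surfaces[0][1].keys()
    rt = {}
    for f in bands:
        a_total = sum(area * coeffs[f] for area, coeffs in surfaces)
        rt[f] = 0.161 * volume_m3 / a_total
    return rt

room = [
    (100.0, {125: 0.02, 1000: 0.03, 4000: 0.04}),  # painted concrete walls
    (50.0,  {125: 0.10, 1000: 0.30, 4000: 0.60}),  # carpeted floor
    (50.0,  {125: 0.15, 1000: 0.05, 4000: 0.05}),  # gypsum ceiling
]
print(rt60_sabine(200.0, room))  # RT60 in seconds, keyed by band
```

Even this crude sketch shows why the tables are organized by center frequency: the carpeted floor soaks up the high bands, so the simulated room rings longer at 125 Hz than at 4 kHz.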

I just want to know how to approach the requirements for virtual objects to have graphics, acoustics and haptics encodings in a single data structure, namely a scene graph per traded item. Someone's LifeGraph software should properly integrate that data into its NXD and render sound, light and force feedback when the object is touched by an owner wearing haptic gear. If the computing machinery needs chipsets for real-time photorealism, binaural sound and force feedback because a quad-core processor will not do, then how do you figure that out analytically? I haven't yet read any analysis of rendering algorithms in the context of instruction set architectures and I/O devices that explains this.
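One way to picture that single per-item data structure is a record carrying all three encodings plus hyperlinks to data the owner doesn't hold. This is a hypothetical sketch of my own, not any existing X3D or H3D schema; every field name and value here is an assumption for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AcousticProps:
    # absorption coefficient per octave-band center frequency (Hz)
    absorption: dict

@dataclass
class HapticProps:
    stiffness_n_per_m: float   # spring constant for force feedback
    damping_ns_per_m: float    # damping for a stable haptic loop

@dataclass
class TradedItem:
    name: str
    geometry_url: str          # X3D scene for the visual encoding
    acoustics: AcousticProps
    haptics: HapticProps
    links: list = field(default_factory=list)  # hyperlinks to others' data

# A hypothetical traded item with all three encodings in one record.
keyboard = TradedItem(
    name="Dell keyboard",
    geometry_url="file:keyboard.x3d",
    acoustics=AcousticProps(absorption={125: 0.05, 1000: 0.10}),
    haptics=HapticProps(stiffness_n_per_m=300.0, damping_ns_per_m=0.5),
)
```

The point of the sketch is the shape of the requirement: one record per traded item, with the graphics payload referenced by URL while the acoustic and haptic material properties travel inline so any renderer attached to the NXD can use them.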

For example, if my mirror world has a Dell keyboard in it and I want to interact with it by typing on it via my true-to-life avatar using haptic gloves and wearing binaural headphones, will I hear the click and clack of typing because Dell properly encoded the material coefficients for acoustics and the keys' spring constants for haptics? The sound of typing should be synthesized as a function of my typing in a room, on a desk or in my lap, with my stereo playing electronica music in the background. LifeGraph software just has to sense that the user has the gloves and binaural headphones attached to the computing machinery executing the simulation of the virtual world immersion and interaction. Should I be able to feel my way around any of my rooms of stuff when I don't have a visual display connected? I mean, I could be wearing a microphone, enter my mirror world at the foyer of my summer home, speak, and expect the LifeGraph sound chipset to analyze the sound radiation and reproduce the reverberation and direct reflections using the traded items and the room's construction materials in place. I would interpret the sound and, knowing my home, figure out where I am as I steer my avatar through the house using echoes. That's a bit far-fetched, but it's plausible with proper encodings of the house and everything in it, including my avatar.
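The spring-constant idea is enough to synthesize a first-pass click. Here is a toy sketch of my own (not anything from Vorländer, and the mass, damping and stiffness values are assumptions): treat the key as a damped mass-spring system and render its impulse response as audio samples.

```python
import math

def key_click(stiffness_n_per_m, mass_kg, damping_ratio=0.05,
              duration_s=0.05, sample_rate=44100):
    """Synthesize a key 'click' as the impulse response of the
    mass-spring system implied by an encoded spring constant."""
    omega_n = math.sqrt(stiffness_n_per_m / mass_kg)      # natural freq (rad/s)
    omega_d = omega_n * math.sqrt(1 - damping_ratio**2)   # damped freq
    n = int(duration_s * sample_rate)
    return [math.exp(-damping_ratio * omega_n * t) * math.sin(omega_d * t)
            for t in (i / sample_rate for i in range(n))]

# A hypothetical 2 g key cap on a 300 N/m spring rings near 62 Hz,
# far too low for a believable click, which already tells you the
# audible transient must come from stiffer contact mechanics than
# the return spring alone.
samples = key_click(300.0, 0.002)
```

Even a wrong-sounding result is informative: it shows which encoded parameters the synthesis actually depends on, which is exactly what the traded-item data structure has to capture.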

21 November 2009

Journal of Virtual Worlds Research think piece

I decided to write a think piece on my concept of mobile mirror worlds for the Journal of Virtual Worlds Research. It should be published in the first half of 2010. The purpose of writing a think piece in this journal is to express my thoughts on a use of virtual reality technologies for the everyday business of life as a natural and legal person, and to quantify the data, data structures and data transfer dynamics in a comprehensible mirror world system. While my investigation into this aspect of virtual worlds (the metaverse) has been brief and haphazard for most of the last two years, I thought I would have stumbled upon resources and references that addressed the data dynamics of virtual reality technologies over decades of usage for a given purpose. I say decades because the baseline for my conceptual user is a century!

I'm going to start the think piece with three data-centric estimates for the 100 years in the 21st century of a middle-class US citizen (ignoring gender variation) born 01 Oct 2000 without complications after a full-term pregnancy and dying 01 Apr 2099. The century cumulative estimates might be

  1. Data volume: 10 x 10^12 bytes
  2. Data structures: 1 x 10^6 files
  3. Data transactions: 3 x 10^5 transactions
Remember there are 876,000 hours in 36,500 days, or 100 years (ignoring leap seconds, days and years). The task is to break down these coarse figures into the story of one's life in terms of supplying, adding value to and consuming goods and services, traded (usually for currency) and mirrored in virtual reality. The ownership paradigm used for data retention keeps the data volume manageable from my perspective, but the number of links to other data owners' sources will be numerous. So much so that I should estimate the number of hyperlinks to data sources an X3D scene graph might have in order to render contextual information that is not owned by the natural person but is useful for interpreting and acting on the goods or services rendered. Let's say
  1. Data source: 3 x 10^5 hyperlinks
The effect on readers should be instant eagerness to demand royalty-free X3D models of the goods and services they trade and consume from government, corporations and fellow citizens, and a mobile mirror world system that intelligently manages the semantic processing of ontologically compliant scene graphs of those goods and services.
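Before the journal piece firms these up, the coarse century totals can at least be sanity-checked as daily rates. This is just my own arithmetic on the estimates above, not published figures:

```python
DAYS = 36_500            # 100 years, ignoring leap days
HOURS = DAYS * 24        # 876,000 hours

bytes_total = 10e12      # 10 x 10^12 bytes over the century
files_total = 1e6        # 1 x 10^6 files
transactions_total = 3e5 # 3 x 10^5 transactions
links_total = 3e5        # 3 x 10^5 hyperlinks to others' data sources

print(f"{bytes_total / DAYS / 1e6:.0f} MB of data per day")
print(f"{files_total / DAYS:.1f} files per day")
print(f"{transactions_total / DAYS:.2f} transactions per day")
print(f"{links_total / DAYS:.2f} new hyperlinks per day")
```

Roughly 274 MB, 27 files and 8 transactions per day for every day of a century; the per-day view makes it easy to judge whether the century totals are too generous or too stingy.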

18 October 2009

My review of Total Recall: How The E-Memory Revolution will Change Everything

Gordon Bell and Jim Gemmell authored Total Recall (ISBN: 978-0-525-95134; http://totalrecallbook.com/), recently published (Sep 2009). I ordered my hardback copy from Amazon.com, and it was delivered to my home Sat, 17 Oct 09. I have read the foreword by Bill Gates and Chapters One and Two so far. Eager to see the layout and graphics, I flipped through the book and was disappointed by the absence of graphs, tables and charts. Hoping to reuse some ontological modeling artifacts, I had purchased the book certain it would contain some samples of artistic design to ingest, query and/or output points and patterns of one's life. Oh well, the writing style is great: conversational, simple and lighthearted.

Something in Chapter One led me back to Wired.com's article on Gordon, but this time I clicked on a couple of related links at the bottom of the web page. To my surprise, I read a couple of articles Wired.com wrote on the 2003 DARPA LifeLog project, which was never awarded because of public criticism over privacy and big-brother concerns should the technology prove successful. The Pentagon even had its own version of the LifeLog project, PAL, in the same year. So I checked the index in Total Recall to see if Gordon and Jim acknowledged these efforts and compared the Broad Agency Announcements that explain the projects' objectives to potential bidders. If you read this post last night, I wrote that there was no reference to DARPA's LifeLog project. Well, I read Chapter Four - Work this morning before going to my office, and sure enough, Gordon writes about DARPA's LifeLog project and names the New York Times reporter who wrote the magazine article that sparked a public outcry over the BAA.

When I conceptualized LifeGraphs, I never considered vacuuming up the digitized inputs my visual, auditory and tactile senses experience, because doing so is outside my paradigm of a mobile mirror world, which is founded on ownership. I see far more things than I own; the same goes for what I hear and touch. I do foresee virtual environment and augmented reality cloud-like services that will provide the mirror objects adjacent to and in proximity of one's mirror world for a given date-time group. I think the complexity of LifeGraphs is less than that of LifeLogs because LifeGraphs, as a system of integrated information flowing over interoperable electronic exchanges (e.g., the Internet), push the harmonization of ontological models of traded and computer-aided-designed goods and services onto the commercial entities of digital content creation rather than the end user. After reading the Internet Archive copy of DARPA BAA 03-30, I can see how the LifeLog innovations and/or inventions in cognitive science and AI seek to put a reasoning algorithm at the sensor head or close to it.

18 September 2009

Data Integrity, Information Integration and Institutional Intelligence

I'm working as a program management consultant in a governmental agency's Office of the CIO, and have been since early Aug 2009. Unfortunately, I'm not immersed in an organization of optimal maturity with respect to any maturity model. That being the case, I still foresee opportunities to proffer the ideas of LifeGraphs to the federal agency next year. The business case to fund the LifeGraph pilot, which will be developmental, is cost avoidance with respect to data integrity, information integration and institutional intelligence.

In the short six weeks I've been consulting at the client site, I've seen spreadsheet after spreadsheet, PowerPoint slide after slide, and Word document after document where the data elements and values are not referenced: no footnotes or endnotes, nada. With the LifeGraph pilot, data integrity will be conspicuous in every instance of data values under the covers of the many document variants for ad hoc workflows.

I haven't seen a single behavior pattern in writing since I've been there, though I've asked about them. While I'm amazed at this situation (I haven't visited the right person, or I don't know how to ask my question correctly), I'm confident the use of behavior patterns will lead to information integration and not just data correlation or integration. And when these patterns become nearly effortless to produce from LifeGraphs, the workers will consider them essential tools for decision making.

The idea of institutional intelligence is a bit of a stretch on words, so collective intelligence might be more accurate. This organization's collective intelligence will be founded on individuals' interpretations of 3D animations of structural patterns that reveal resource relationships, interfaces and inertia, and of behavior patterns of the effects from employing resources. There are user-friendly tools today to tap the collective, and I want to merge the LifeGraph data structure into the statistical nature of collective intelligence.


25 August 2009

The convergence - Triple Threat or Perfect Storm?

The Virtual Worlds Research Consortium has expressed interest in the LifeGraph concept over the last month. I wrote to the president of the Texas-based organization asking for connections after reading about their upcoming issue featuring standards for 3D graphics over the WWW. Their interest will be clearer after we talk Mon, 31 Aug 09. Meanwhile, I have a new assignment (client) that is very Web 2.0 centric and will be for the next decade as far as I can tell. My intent is to engender a genuine X3D interest as I exceed their expectations managing IT projects for enterprise solutions.

I have not made progress documenting LifeGraph Solutions with Enterprise Architect v7.5. I think the definition of what LifeGraph software is should be illustrated in UML and verbosely annotated, since I'm not likely to prototype it in a programming or scripting language anytime soon. Besides the obvious UML diagramming of an architecture in progress, the business plan for LifeGraph Solutions has grown little beyond a cover page preceding a table of contents. Watching the iPhone developer videos on the Apple website is motivating. I see medical applications for patients and providers that are LifeGraph-like, and that is great. My email to William A. Yasnoff, MD, PhD, FACMI; Karen Trudel; and CAPT Mary Forbes, USPHS, asking for any baseline health, wellness and/or medical information creation, derivation and aggregation for the statistically average male and female who live to 100 years of age will probably go unanswered like most of my inquiries to officials. But I have to ask, considering they are Senior Advisor, National Health Information Infrastructure (NHII); Informatics Program Manager; and Federal Health Architecture Program Manager, HHS.

I attended the 2009 SIGGRAPH in New Orleans for a couple of days. My work schedule and budget could not support being there the entire week. The Web3D Consortium members in attendance were a great bunch to hang out with and talk business.

I have yet to cross paths with someone who has native XML database management system experience on mobile devices, X3D browser implementation skills, and the ability to integrate ontological models into XML schemas. The search for those few goes on, and my passion to assemble a team or company willing to pursue the same possibilities, with LifeGraphs as the nexus between governments, corporations and citizens, continues.

03 July 2009

DC ACM Talk w/Nick Polys was a success

I opened, closed, and then answered several questions in front of an audience of 35 computer enthusiasts and professionals on 10 Jun 2009 at the AAAS Building, from 7:30 p.m. to 9:15 p.m. Nick spoke for 25 minutes on the technical history and outlook of the Extensible 3D specifications, implementations and emendations. Many in the audience wondered about the societal effects of LifeGraphs if they were to emerge under the conditions I described. As expected, a few opinions were negative from the perspective of "Big Brother" issues. These opinions were balanced with questions like, "Are you going to write a book?" and "How do you plan to develop this?"

I continue to query CIOs in health and medical centers about a womb-to-postmortem health maintenance data (reference) model, and I continue to get no replies to what I think is a simple statistical question. I have enlisted someone from CSC Healthcare to ask the question whenever the opportunity arises. I was surprised to discover I had emailed the Harvard Medical School CIO 12 months ago asking the question; just a week ago, during a call with my CSC Healthcare colleague, he said he would pose the question to the HMS CIO. I had completely forgotten about my cold email when he mentioned their relationship, and I haven't heard back from either of them.

Just yesterday, I watched a webinar on healthcare semantics where dbMotion's prominent employees presented a great story about their virtual patient record, which could hold womb-to-tomb medical information/records. But when I asked my data reference model question during the Q&A, the speaker had no clue how much data might accumulate over 100 years of life for an average patient in the US medical care system. I don't know when I'm going to let this question go and just start estimating myself, then publish for comment. That decision is coming as I turn my attention to the natural resources data reference model challenge.

24 April 2009

Speaking to the DC Chapter of the Association for Computing Machinery

I was invited to talk about X3D for Enterprise Applications to the DC ACM after talking to the chapter president in Jan 09. Go to www.dcacm.org for the evening meeting details. It's a FREE event, so please consider attending. This opportunity has forced me to speed up my investigation and creation of data models for natural persons' financial transaction profiles that cover a century of living. You might imagine NO ONE has such profiles from which to derive the data volume generated by those transactions, let alone the data volume of the goods and services that would be captured in X3D formats for a mirror world database! But that is exactly the tone and twist I will deliver in my 40-minute talk. Data, data, data! It's all about the data, and the information technology to manage it comes second. The graphical, acoustic and haptic user interfaces to the data are implied in the phrase "information technology".

I have yet to read an article, paper, essay, book chapter, brief, etc. that describes the data generation, accumulation and correlation of individuals' non-discretionary transactions with corporations (e.g., grocers, employers, churches, schools) and governments (e.g., county land records offices, state motor vehicle administrations, the Internal Revenue Service) from womb to tomb. The publications I read that discuss the services provided to clients, customers, consumers and/or citizens focus on the information technology costs, energy consumption, security and other IT characteristics, but not on the data in quantitative and individualistic terms. None of the publications I've read presents even a sketch of the data dynamics in quantitative terms on an illustrative diagram of the data lifecycle in the context of the service offering or the person's life span. It's as if the reader (everyone) just knows the details of the data elements, formats, values, etc.

My objective in speaking to what I hope will be a standing-room-only audience (of 75) is to show that mobile mirror world technologies like LifeGraphs are a great idea for the information nexus between the supply, value and customer chains of e-commerce/government/health/etc. It's an old idea, depending on how close to similar ideas you consider mine. I intend to create a desire in the audience to speak up and demand LifeGraph components from the parties they do business with. I hope someone in government (legislative and regulatory) will be present, because this idea goes nowhere fast without legislated/regulated demand created at all levels of government.

There is a book entitled "Mirror Worlds" by David Gelernter that I intend to read (256 pp) before the talk so I can reference it in my opening remarks. The book was written in 1991, soon after I graduated from USAFA.

16 February 2009

What's taking so long...

I've been home from Baghdad, Iraq for two months now. I haven't finished my home inventory, my SysML diagrams are incomplete, and I'm not clear what kind of businessperson or government official I should look to for help in taking what appears to be a great idea of integration and investing resources into implementation. With so much "homework" to do while staying available to my 9-month-old son and loving wife, it's going to be nothing less than challenging to put in the sweat labor of learning data- and document-centric native XML database design, mobile edition APIs and OSes, ECMAScript, etc. to create that Blue Ocean I wrote about in my grant proposal.

I have a plan to incorporate LifeGraph Solutions LLC in the state of Maryland this month. I've been advised to do so to protect what little intellectual property I might otherwise be freely sharing with others while surveying the market. It only takes a couple hundred bucks to incorporate, but writing the business plan and other legal documents takes more clock time than it takes me to earn the cash to file. Lucky for me, I managed three $150K six-month Cooperative Agreements in Baghdad, Iraq that dealt exclusively with planning Iraqi industrial operations using Western-styled business plans.

I do struggle with the idea of monetizing LifeGraph solutions with the consumer. I mean, I don't envision governments, corporations and citizens using their PayPal accounts or payment cards to download LifeGraph Basic or whatever version of a feature/function set. Maybe that will be a revenue stream someday. As I think about the way I want information accessed and presented, to change the way and speed of my insight and investment, I foresee a need for professional services to author X3D scenes for LifeGraph software. Those who trade goods may want, or their customers might demand, the product in the X3D file format so it can be included in their LifeGraph. And when transactions don't come with the X3D file, getting the product implemented in X3D better and faster than anyone else will generate more revenue than the competition. I wonder if I'm describing the reinvention of accountants' and bookkeepers' core competencies as I write this. Since I haven't had any serious and open-minded conversations about LifeGraph Solutions with accountants or bookkeepers, I don't really think so.

I will be presenting LifeGraph Solutions to the District of Columbia chapter of the Association for Computing Machinery (DC ACM) the first week of Jun 2009. I've accepted an invitation to do so as an impetus to complete the few grant-related tasks I opened this post with. I joined the DC ACM recently and attended their Jan 09 program on Internet Forensics. There were 40 people in attendance for the hour-long brief at the AAAS building on New York Ave, NW. I just hope my talk attracts that many members at 7 p.m. on a weekday before the start of summer.