<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>DOO YON KIM</title>
	<atom:link href="https://www.doyoki.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.doyoki.com</link>
	<description></description>
	<lastBuildDate>Wed, 04 Feb 2026 08:48:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>

<image>
	<url>https://www.doyoki.com/wp-content/uploads/2016/02/LOGGGGO22-CENTER-35.jpg</url>
	<title>DOO YON KIM</title>
	<link>https://www.doyoki.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>V2B (VR 2 BIZ)</title>
		<link>https://www.doyoki.com/v2b/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Wed, 11 Jul 2018 06:14:14 +0000</pubDate>
				<category><![CDATA[AR/VR]]></category>
		<category><![CDATA[Product]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=1181</guid>

					<description><![CDATA[&#160; VR for Industrial Designers with LEAP Motion &#160; A. Background V2B stands for VR 2 Biz, a term borrowed from B2B and B2C (business-to-business and business-to-consumer). Most VR products right now are B2C: individuals play games with them. Also, in terms of creating [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>&nbsp;</p>
<p><span style="font-size: 12pt; font-family: Verdana, Geneva; color: #000000;">VR for Industrial Designers with LEAP Motion</span></p>
<p>&nbsp;</p>
<p><iframe title="V2B VR" src="https://player.vimeo.com/video/215090778?dnt=1&amp;app_id=122963" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write"></iframe></p>
<p><span style="font-size: 10pt;">A. Background</span></p>
<p><span style="font-size: 10pt;">V2B stands for VR 2 Biz, a term borrowed from B2B and B2C (business-to-business and business-to-consumer). Most VR products right now are B2C: individuals play games with them. The same holds for creating new objects in VR; the B2C products let people sketch and enjoy the experience of doodling. The picture below depicts the use case of drawing in VR.</span></p>
<p><span style="font-size: 10pt;"><img decoding="async" class="alignnone wp-image-1190" src="https://www.doyoki.com/wp-content/uploads/2016/12/glen_keane_vr_drawing.0-595x335.gif" alt="glen_keane_vr_drawing-0" width="100%" height="679" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/glen_keane_vr_drawing.0-595x335.gif 595w, https://www.doyoki.com/wp-content/uploads/2016/12/glen_keane_vr_drawing.0-480x270.gif 480w" sizes="(max-width: 595px) 100vw, 595px" /></span></p>
<p><span style="font-size: 10pt;">In terms of B2B, VR/AR combines hand gestures with the immersive experience. One use case is the auto industry, where designers create a car interior and experience the passenger environment in VR. The picture below shows a user modifying a 3D object by hand.</span></p>
<p><span style="font-size: 10pt;"><img decoding="async" class="alignnone wp-image-1191" src="https://www.doyoki.com/wp-content/uploads/2016/12/motorbike-1280x853-595x397.jpg" alt="motorbike-1280x853" width="100%" height="665" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/motorbike-1280x853-595x397.jpg 595w, https://www.doyoki.com/wp-content/uploads/2016/12/motorbike-1280x853-480x320.jpg 480w, https://www.doyoki.com/wp-content/uploads/2016/12/motorbike-1280x853-768x512.jpg 768w, https://www.doyoki.com/wp-content/uploads/2016/12/motorbike-1280x853-960x640.jpg 960w, https://www.doyoki.com/wp-content/uploads/2016/12/motorbike-1280x853.jpg 1280w" sizes="(max-width: 595px) 100vw, 595px" /></span></p>
<p><span style="font-size: 10pt;">B. Accuracy</span></p>
<p><span style="font-size: 10pt;">However, modification requires accuracy. We don&#8217;t want dimensions set by eyeballing hand gestures; we need something like AutoCAD, with multi-digit precision.</span></p>
<p><span style="font-size: 10pt;"><img decoding="async" class="alignnone wp-image-1194" src="https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-18-at-22.43.11-595x372.png" alt="screen-shot-2016-10-18-at-22-43-11" width="100%
" height="778" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-18-at-22.43.11-595x372.png 595w, https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-18-at-22.43.11-480x300.png 480w, https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-18-at-22.43.11-768x480.png 768w, https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-18-at-22.43.11-960x600.png 960w, https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-18-at-22.43.11.png 1440w" sizes="(max-width: 595px) 100vw, 595px" /></span></p>
<p><span style="font-size: 10pt;">To achieve that accuracy with hand gestures, I looked at the micrometer. A micrometer has a dial system with a thimble and a sleeve, each assigned a different scale: the sleeve reads in 0.025-inch divisions and the thimble in 0.001-inch divisions. Borrowing this system, I assigned each finger a different scale to achieve the accuracy. <img decoding="async" class="alignnone wp-image-1203" src="https://www.doyoki.com/wp-content/uploads/2016/12/Mahr_Micromar_40A_0-25mm_Micrometer-1-595x281.jpg" alt="mahr_micromar_40a_0-25mm_micrometer" width="100%" height="543" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/Mahr_Micromar_40A_0-25mm_Micrometer-1-595x281.jpg 595w, https://www.doyoki.com/wp-content/uploads/2016/12/Mahr_Micromar_40A_0-25mm_Micrometer-1-480x227.jpg 480w, https://www.doyoki.com/wp-content/uploads/2016/12/Mahr_Micromar_40A_0-25mm_Micrometer-1-768x363.jpg 768w, https://www.doyoki.com/wp-content/uploads/2016/12/Mahr_Micromar_40A_0-25mm_Micrometer-1-960x454.jpg 960w" sizes="(max-width: 595px) 100vw, 595px" /></span></p>
<p><span style="font-size: 10pt;">Here are the fingers that are assigned to the different scales.</span></p>
<p><span style="font-size: 10pt;"><img decoding="async" class="wp-image-1202 alignleft" src="https://www.doyoki.com/wp-content/uploads/2016/12/20161018_205915-595x335.jpg" alt="20161018_205915" width="100%" height="599" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/20161018_205915-595x335.jpg 595w, https://www.doyoki.com/wp-content/uploads/2016/12/20161018_205915-480x270.jpg 480w, https://www.doyoki.com/wp-content/uploads/2016/12/20161018_205915-768x432.jpg 768w, https://www.doyoki.com/wp-content/uploads/2016/12/20161018_205915-960x540.jpg 960w, https://www.doyoki.com/wp-content/uploads/2016/12/20161018_205915-640x360.jpg 640w" sizes="(max-width: 595px) 100vw, 595px" /></span></p>
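<p><span style="font-size: 10pt;">The micrometer analogy can be sketched as code. This is an illustrative sketch only, not the actual V2B implementation: the finger assignments, the two step sizes (borrowed from the sleeve and thimble), and the readDimension helper are assumptions made for the example.</span></p>

```javascript
// Hypothetical sketch of the micrometer-style scale system: each finger
// gets a step size in inches, mirroring a micrometer's sleeve (0.025 in)
// and thimble (0.001 in). These assignments are illustrative only.
const FINGER_STEPS = {
  index: 0.025,  // coarse scale, like the sleeve
  middle: 0.001, // fine scale, like the thimble
};

// Combine per-finger tick counts into one dimension, the way a
// micrometer combines its sleeve and thimble readings.
function readDimension(ticks) {
  let total = 0;
  for (const [finger, count] of Object.entries(ticks)) {
    total += (FINGER_STEPS[finger] || 0) * count;
  }
  return Number(total.toFixed(3)); // keep three-decimal accuracy
}
```

<p><span style="font-size: 10pt;">For example, ten coarse ticks plus thirteen fine ticks read as 0.263 inches, the same arithmetic a machinist does on a real micrometer.</span></p>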
<p><span style="font-size: 10pt;">C. 3 Axes &#8211; colors</span></p>
<p><span style="font-size: 10pt;">With the accuracy achievable, the next need is the three dimensions: X, Y, and Z. In almost every piece of 3D software, the X, Y, Z axes are color-coded with RGB, so V2B adopts the same XYZ::RGB mapping: when the active axis changes, the light color changes accordingly.</span></p>
<p><span style="font-size: 10pt;"><img fetchpriority="high" decoding="async" class="alignnone size-full wp-image-1204" src="https://www.doyoki.com/wp-content/uploads/2016/12/SitTF-1.png" alt="sittf-1" width="243" height="280" /></span></p>
<p><span style="font-size: 10pt;">D. Hand Gestures</span></p>
<p><span style="font-size: 10pt;">Since I was going to use the Leap Motion, which reports the distances between fingers, I looked at sign languages and made a chart of the distances I needed to consider. I made two versions: one starting from an open palm and the other starting from a fist.</span></p>
<p><span style="font-size: 10pt;"><img decoding="async" class="alignnone wp-image-1184" src="https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-25-at-21.55.20-595x401.png" alt="screen-shot-2016-10-25-at-21-55-20" width="100%" height="765" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-25-at-21.55.20-595x401.png 595w, https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-25-at-21.55.20-480x323.png 480w, https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-25-at-21.55.20-768x517.png 768w, https://www.doyoki.com/wp-content/uploads/2016/12/Screen-Shot-2016-10-25-at-21.55.20.png 952w" sizes="(max-width: 595px) 100vw, 595px" /></span></p>
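<p><span style="font-size: 10pt;">As a rough sketch of how such a chart turns into code, the two starting poses can be told apart from finger-extension distances of the kind the Leap Motion reports. This is an assumed illustration, not the actual V2B code; the 60 mm threshold and the classifyPose helper are made up for the example.</span></p>

```javascript
// Illustrative only: classify the two starting poses described above
// from fingertip-to-palm distances (millimeters). The 60 mm "extended"
// threshold is an assumed value, not taken from the original project.
const EXTENDED_MM = 60;

function classifyPose(tipToPalmDistances) {
  const extended = tipToPalmDistances.filter((d) => d > EXTENDED_MM).length;
  if (extended === tipToPalmDistances.length) return 'open-palm';
  if (extended === 0) return 'fist';
  return 'partial'; // mid-gesture, between the two starting poses
}
```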
<p><span style="font-size: 10pt;">E. Feedback</span></p>
<p><span style="font-size: 10pt;">After presenting to multiple users, I received feedback on the hand gestures. Most people understood the three axes because the mapping is natural to engineers, who indicate X, Y, Z with three fingers: thumb, index, and middle. However, people were confused by the scales. <a href="http://walczakheiss.com/" target="_blank" rel="noopener noreferrer">Marek Walczak</a> gave useful feedback based on his architecture background: use the thumb as positive and the pinky as negative. With that, I finalized the hand gestures.</span></p>
<p>&nbsp;</p>
<p><span style="font-size: 10pt;"><img decoding="async" class="alignnone wp-image-1206" src="https://www.doyoki.com/wp-content/uploads/2016/12/instruct2-595x202.jpg" alt="instruct" width="100%" height="334" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/instruct2-595x202.jpg 595w, https://www.doyoki.com/wp-content/uploads/2016/12/instruct2-480x163.jpg 480w, https://www.doyoki.com/wp-content/uploads/2016/12/instruct2-768x261.jpg 768w, https://www.doyoki.com/wp-content/uploads/2016/12/instruct2-960x326.jpg 960w" sizes="(max-width: 595px) 100vw, 595px" /></span></p>
<p><span style="font-size: 10pt;">F. Development</span></p>
<p><span style="font-size: 10pt;">During development, I made a wooden stand that holds the Leap Motion and a smartphone at the same time. It let both hands work freely with the Leap Motion, and it also let me document the development with the phone camera.</span></p>
<p><span style="font-size: 10pt;"><img decoding="async" class="alignnone wp-image-1196" src="https://www.doyoki.com/wp-content/uploads/2016/12/20161210_210910-595x446.jpg" alt="20161210_210910" width="100%" height="713" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/20161210_210910-595x446.jpg 595w, https://www.doyoki.com/wp-content/uploads/2016/12/20161210_210910-480x360.jpg 480w, https://www.doyoki.com/wp-content/uploads/2016/12/20161210_210910-768x576.jpg 768w, https://www.doyoki.com/wp-content/uploads/2016/12/20161210_210910-960x720.jpg 960w" sizes="(max-width: 595px) 100vw, 595px" />Also, during the demo I found that assigning each finger its own scale creates a hurdle to modification. As a result, I simplified the interface to plus and minus as an introduction to the new interaction. <img decoding="async" class="alignnone wp-image-1287" src="https://www.doyoki.com/wp-content/uploads/2016/12/instruct-595x202.jpg" alt="" width="100%" height="376" srcset="https://www.doyoki.com/wp-content/uploads/2016/12/instruct-595x202.jpg 595w, https://www.doyoki.com/wp-content/uploads/2016/12/instruct-480x163.jpg 480w, https://www.doyoki.com/wp-content/uploads/2016/12/instruct-768x261.jpg 768w, https://www.doyoki.com/wp-content/uploads/2016/12/instruct-960x326.jpg 960w" sizes="(max-width: 595px) 100vw, 595px" /></span></p>
<p><span style="font-size: 10pt;">G. V2B</span></p>
<p><span style="font-size: 10pt;">Here is the final V2B build with the Oculus and the Leap Motion.</span></p>
<p><iframe title="V2B VR" src="https://player.vimeo.com/video/215090778?dnt=1&amp;app_id=122963" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write"></iframe></p>
<p>Pre-AR Test</p>
<p><iframe title="V2B Pre-AR" src="https://player.vimeo.com/video/215090712?dnt=1&amp;app_id=122963" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write"></iframe></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Experiment on social norm in chatting.</title>
		<link>https://www.doyoki.com/truly-synchronous-chat-fistbump/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Sun, 08 Jul 2018 06:26:31 +0000</pubDate>
				<category><![CDATA[Product]]></category>
		<category><![CDATA[WEBAPP]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=1128</guid>

					<description><![CDATA[&#160; Making a Truly Synchronous Chat by Matching Word Bubbles From the delivery of letters on horseback in ancient times, we know that people have long demanded synchronous information delivery. However, even in a live chat with video and sound, we know that it [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>&nbsp;</p>
<p><span style="font-size: 12pt; font-family: Verdana, Geneva; color: #000000;">Making a Truly Synchronous Chat by Matching Word Bubbles</span></p>
<p>From the delivery of letters on horseback in ancient times, we know that people have long demanded synchronous information delivery.</p>
<p>However, even in a live chat with video and sound, social norms make truly synchronous delivery of information impossible.</p>
<p>At best, we reach the point where, when one person starts talking, the other has to stop in order to listen.</p>
<p>So here I wanted to experiment: what if we talked truly simultaneously by chatting with matching word bubbles?</p>
<p>Throughout the experiment, I found that even with this matching system, one user waits until the other stops writing. It is simply how long the human brain takes to process.</p>
<p>&nbsp;</p>
<p><strong>LINK TO THE WORK:<a href="https://dyk286.itp.io:8080/index3.html"> https://dyk286.itp.io:8080/index3.html</a></strong></p>
<p><iframe loading="lazy" src="https://www.youtube.com/embed/Iiawutr3S5I" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>1. BACKGROUND</p>
<p>Common sense in conversation says we keep quiet while the other person speaks; it is true that talking over someone is rude. However, we carry this social norm into chatting too, even though chat is a synchronous platform where text can be read as it is being typed. So I wanted to make synchronous text chatting: a new way of chatting in which people keep talking and overlap their voices.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-810" src="https://www.doyoki.com/wp-content/uploads/2016/03/SYNC.jpg" alt="SYNC" width="1165" height="637" srcset="https://www.doyoki.com/wp-content/uploads/2016/03/SYNC.jpg 558w, https://www.doyoki.com/wp-content/uploads/2016/03/SYNC-480x262.jpg 480w, https://www.doyoki.com/wp-content/uploads/2016/03/SYNC-508x278.jpg 508w" sizes="auto, (max-width: 1165px) 100vw, 1165px" /></p>
<p>2. DEVELOPMENT</p>
<p>I was introduced to the video and audio channels, which changed my thought process. Also, in terms of data ownership, the P2P ID generation is very clean: it is data-free (no data is stored).</p>
<p><img loading="lazy" decoding="async" class=" wp-image-811 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2016/03/Screen-Shot-2016-03-08-at-12.01.04-PM-595x277.png" alt="Screen Shot 2016-03-08 at 12.01.04 PM" width="1009" height="469" srcset="https://www.doyoki.com/wp-content/uploads/2016/03/Screen-Shot-2016-03-08-at-12.01.04-PM-595x277.png 595w, https://www.doyoki.com/wp-content/uploads/2016/03/Screen-Shot-2016-03-08-at-12.01.04-PM-480x224.png 480w, https://www.doyoki.com/wp-content/uploads/2016/03/Screen-Shot-2016-03-08-at-12.01.04-PM-508x237.png 508w, https://www.doyoki.com/wp-content/uploads/2016/03/Screen-Shot-2016-03-08-at-12.01.04-PM.png 712w" sizes="auto, (max-width: 1009px) 100vw, 1009px" />So I worked on the CSS, HTML, and JavaScript to make this project work. I used Tone.js to make sounds, just like other chat webpages. Here is the link; it works only while I am running server.js. As I worked, I kept googling for the answers I needed. I also changed the page to refresh every minute, which puts the users at one shared starting point as time passes: the starting point of getting synchronous. Then I tested the chat with people.</p>
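<p>The core idea, sending every keystroke as it happens instead of waiting for a finished message, can be sketched as a pure state update. This is an assumed minimal sketch, not the original server.js; the applyKeystroke helper and the bubble/event shapes are made up for illustration, and in the real app each event would travel over the P2P channel.</p>

```javascript
// Assumed sketch (not the original server.js): each keystroke is an
// event, so both users' word bubbles grow at the same time instead of
// one side waiting for the other to press "send".
function applyKeystroke(bubbles, event) {
  // bubbles: { me: '...', peer: '...' }; event: { from: 'me'|'peer', key }
  const next = { ...bubbles };
  if (event.key === 'Backspace') {
    next[event.from] = next[event.from].slice(0, -1);
  } else {
    next[event.from] += event.key;
  }
  return next;
}

// Both sides keep typing; neither waits for the other.
let state = { me: '', peer: '' };
state = applyKeystroke(state, { from: 'me', key: 'h' });
state = applyKeystroke(state, { from: 'peer', key: 'y' });
state = applyKeystroke(state, { from: 'me', key: 'i' });
// state.me is 'hi' while state.peer is already 'y': two bubbles at once
```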
<p>3. User Test</p>
<p>First I wanted functional completeness. I chatted with a friend in France, which proved that the P2P works worldwide. During this test I learned that, in video chatting, the video should go with the audio. In this chat, however, the audio and video are constraints because they undermine the need for the word bubbles. So I muted the audio and shrank the video, conceptualizing it as an on-air sign, as in broadcasting.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-1134" src="https://www.doyoki.com/wp-content/uploads/2016/09/20160501_211924-595x335.jpg" alt="20160501_211924" width="1160" height="653" srcset="https://www.doyoki.com/wp-content/uploads/2016/09/20160501_211924-595x335.jpg 595w, https://www.doyoki.com/wp-content/uploads/2016/09/20160501_211924-480x270.jpg 480w, https://www.doyoki.com/wp-content/uploads/2016/09/20160501_211924-768x432.jpg 768w, https://www.doyoki.com/wp-content/uploads/2016/09/20160501_211924-960x540.jpg 960w, https://www.doyoki.com/wp-content/uploads/2016/09/20160501_211924-640x360.jpg 640w, https://www.doyoki.com/wp-content/uploads/2016/09/20160501_211924.jpg 1245w" sizes="auto, (max-width: 1160px) 100vw, 1160px" /></p>
<p>4. Design/Naming</p>
<p>I struggled with the design and colors. In the end, I made the design more seamless and crisper with shadows. Also, as I designed the divs, I came to think of the matching word bubbles as a fist bump, so I made a logo that looks like one.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-1135" src="https://www.doyoki.com/wp-content/uploads/2016/09/screen-shot-2016-05-10-at-01-51-16-595x499.png" alt="screen-shot-2016-05-10-at-01-51-16" width="1141" height="957" srcset="https://www.doyoki.com/wp-content/uploads/2016/09/screen-shot-2016-05-10-at-01-51-16-595x499.png 595w, https://www.doyoki.com/wp-content/uploads/2016/09/screen-shot-2016-05-10-at-01-51-16-480x402.png 480w, https://www.doyoki.com/wp-content/uploads/2016/09/screen-shot-2016-05-10-at-01-51-16-768x644.png 768w, https://www.doyoki.com/wp-content/uploads/2016/09/screen-shot-2016-05-10-at-01-51-16.png 804w" sizes="auto, (max-width: 1141px) 100vw, 1141px" /></p>
<p>5. Modification</p>
<p>I modified the design into something I like rather than following Google&#8217;s Material Design, because Material Design does not convey my philosophy: fun, human interactivity.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-1290" src="https://www.doyoki.com/wp-content/uploads/2016/09/Screen-Shot-2017-03-03-at-17.09.07-595x312.png" alt="" width="980" height="514" srcset="https://www.doyoki.com/wp-content/uploads/2016/09/Screen-Shot-2017-03-03-at-17.09.07-595x312.png 595w, https://www.doyoki.com/wp-content/uploads/2016/09/Screen-Shot-2017-03-03-at-17.09.07-480x252.png 480w, https://www.doyoki.com/wp-content/uploads/2016/09/Screen-Shot-2017-03-03-at-17.09.07-768x403.png 768w, https://www.doyoki.com/wp-content/uploads/2016/09/Screen-Shot-2017-03-03-at-17.09.07-960x504.png 960w, https://www.doyoki.com/wp-content/uploads/2016/09/Screen-Shot-2017-03-03-at-17.09.07.png 1313w" sizes="auto, (max-width: 980px) 100vw, 980px" /></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>LETS</title>
		<link>https://www.doyoki.com/lets/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Fri, 06 Jul 2018 22:40:15 +0000</pubDate>
				<category><![CDATA[MOBILE APP]]></category>
		<category><![CDATA[Product]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=1617</guid>

					<description><![CDATA[❒ I made the iOS app lele! &#160; ❒ LETS: An Efficient Platform to Make People Social in the Real World. ❒ Online services these days promote strengthening the physical community by joining an online community. However, after people join an online community based on a physical relationship, the actual physical interaction weakens over time. We have [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>❒ I made the iOS app lele!</p>
<p><a href="https://apps.apple.com/us/app/lele/id1472869580?ls=1" target="_blank" rel="noopener noreferrer"><img loading="lazy" decoding="async" class="aligncenter wp-image-3168 size-thumbnail" src="https://www.doyoki.com/wp-content/uploads/2018/07/app-store-coupon-2048x708-480x166.png" alt="" width="480" height="166" srcset="https://www.doyoki.com/wp-content/uploads/2018/07/app-store-coupon-2048x708-480x166.png 480w, https://www.doyoki.com/wp-content/uploads/2018/07/app-store-coupon-2048x708-800x277.png 800w, https://www.doyoki.com/wp-content/uploads/2018/07/app-store-coupon-2048x708-768x266.png 768w, https://www.doyoki.com/wp-content/uploads/2018/07/app-store-coupon-2048x708-1200x415.png 1200w, https://www.doyoki.com/wp-content/uploads/2018/07/app-store-coupon-2048x708-1860x643.png 1860w, https://www.doyoki.com/wp-content/uploads/2018/07/app-store-coupon-2048x708.png 2048w" sizes="auto, (max-width: 480px) 100vw, 480px" /></a></p>
<p>&nbsp;</p>
<p class="_1LoK">❒ LETS: An Efficient Platform to Make People Social in the Real World.</p>
<p><iframe loading="lazy" title="ITP Thesis Week 2017: Doo Yon Kim - ? LETS :" src="https://player.vimeo.com/video/216587729?dnt=1&amp;app_id=122963" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write"></iframe></p>
<p>❒ Online services these days promote strengthening the physical community by joining an online community. However, after people join an online community based on a physical relationship, the actual physical interaction weakens over time. We have hundreds of Facebook friends, but the ties are strong only online, not in actual life. So I wanted to make an &#8216;efficient platform&#8217; that quickly turns user/group interactions into &#8216;actual behavior.&#8217;</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-1619" src="https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-5-595x335.jpg" alt="" width="595" height="335" srcset="https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-5-595x335.jpg 595w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-5-480x270.jpg 480w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-5-768x432.jpg 768w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-5.jpg 960w" sizes="auto, (max-width: 595px) 100vw, 595px" /></p>
<p>&nbsp;</p>
<p>❒ On September 15, 2015, in Boston, there was a premiere of Black Mass. From this event, one elderly woman went viral over the internet. She became an online hero because, even with Johnny Depp there, she was the only person not taking a picture. The futurist Ray Kurzweil predicts digital implants in the human body, and he points to people&#8217;s dependency on smartphones as an example of a brain extender. As we rely on digital devices, we may be losing the reality in our lives. Then what do digital devices provide for us? Efficiency. We use emojis as shorthand to express feelings in our text messages. So we can see that people are looking for efficiency in their lives.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-1621" src="https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-1-595x335.jpg" alt="" width="595" height="335" srcset="https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-1-595x335.jpg 595w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-1-480x270.jpg 480w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-1-768x432.jpg 768w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-1.jpg 960w" sizes="auto, (max-width: 595px) 100vw, 595px" /></p>
<p>&nbsp;</p>
<p>In group text messaging, it is hard to get to the point when scheduling something together. One person might ask for Friday night, another can&#8217;t make it on Friday, and a third is looking for some other time. So I wanted an efficient way to schedule. Changing the process from &#8220;talk, then do a co-activity&#8221; to &#8220;decide on a co-activity, then talk&#8221; produces a different behavior.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-1622" src="https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-2-595x335.jpg" alt="" width="595" height="335" srcset="https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-2-595x335.jpg 595w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-2-480x270.jpg 480w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-2-768x432.jpg 768w, https://www.doyoki.com/wp-content/uploads/2017/06/Untitled-2.jpg 960w" sizes="auto, (max-width: 595px) 100vw, 595px" /></p>
<p>At ITP, people enjoy playing foosball, but finding three other players is a painful process. Over time we developed our own efficient gestures: as soon as people made eye contact, both fists were shaken, and in reply people either nodded or shook their heads. From here, I wanted to make an efficient platform between friends, involving a user interface and a new language: the simplest possible language to kickstart an action. This web app quickly disseminates any activity a user wants to do to every friend. Before developing the app, I found the key to expressing human activities: 5W1H. I trimmed it to four: What, Where, When, Who. Each topic has options. What: Drink, Sports, Food, Attractions. Where: 0 miles, 1 mile, 3 miles, 5 miles. When: Now, 1 hour later, 2 hours later, 3 hours later, 4 hours later. Who: 1 person, 2 people, 3 people, 4 people. A user (the &#8220;erector&#8221;) quickly taps icons and sends the proposal to all of their friends, and the friends join the chat. In terms of language, I wanted to make a new application of emoji to describe a schedule.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-2089" src="https://www.doyoki.com/wp-content/uploads/2015/12/LETS-FLOW.001-1-595x333.jpeg" alt="" width="976" height="546" srcset="https://www.doyoki.com/wp-content/uploads/2015/12/LETS-FLOW.001-1-595x333.jpeg 595w, https://www.doyoki.com/wp-content/uploads/2015/12/LETS-FLOW.001-1-480x269.jpeg 480w, https://www.doyoki.com/wp-content/uploads/2015/12/LETS-FLOW.001-1-768x430.jpeg 768w, https://www.doyoki.com/wp-content/uploads/2015/12/LETS-FLOW.001-1-960x537.jpeg 960w, https://www.doyoki.com/wp-content/uploads/2015/12/LETS-FLOW.001-1.jpeg 1024w" sizes="auto, (max-width: 976px) 100vw, 976px" /></p>
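<p>The 4W model above amounts to a tiny fixed vocabulary, which is what makes a proposal fast to compose. Here is an illustrative sketch, not the actual LETS app code; the option labels are abbreviated and the makeProposal helper is made up for the example.</p>

```javascript
// Illustrative 4W vocabulary (labels abbreviated from the post's
// Drink/Sports/Food/Attractions, mile, hour, and people options).
const OPTIONS = {
  what: ['Drink', 'Sports', 'Food', 'Attractions'],
  where: ['0 mi', '1 mi', '3 mi', '5 mi'],
  when: ['Now', '+1h', '+2h', '+3h', '+4h'],
  who: [1, 2, 3, 4], // number of people needed
};

// Build a proposal, rejecting anything outside the fixed vocabulary;
// a closed vocabulary is what keeps the interaction one tap per topic.
function makeProposal(what, where, when, who) {
  const choice = { what, where, when, who };
  for (const [topic, value] of Object.entries(choice)) {
    if (!OPTIONS[topic].includes(value)) {
      throw new Error(`invalid ${topic}: ${value}`);
    }
  }
  return choice;
}
```

<p>For the foosball case, makeProposal('Sports', '0 mi', 'Now', 4) describes &#8220;sports, right here, right now, four people&#8221; in four taps.</p>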
<p>Machine learning will soon mimic the human brain in every aspect, even in anything that deals with human emotions; much of pop music is already computer-optimized. The industrial revolution replaced human muscle; now machine learning is replacing human brains. For example, if we had nothing but autonomous cars, would cars need turn signals? If every car were connected, we would not even need traffic lights. In the future, tools we once needed for communication will no longer be needed. So the chat I am building aims at that future: you don&#8217;t talk, and you don&#8217;t have to speak the same language as your friend. The only thing you need is an understanding of the icons. If you watched Facebook F8 2016, the key was nothing but better expression and better sharing. Facebook used to own expression, then Snapchat took over with video, and now Facebook is competing back by adding new features. So we can see the trend of how it is evolving.</p>
<p style="text-align: center;">Text ⇢ Picture ⇢ Video</p>
<p>So I am heading to the other side of expression, before even text: the icon. It is better sharing because people share a moment by doing the same activity.</p>
<p style="text-align: center;">&#8230; &#8230; &#8230;</p>
<p>A. Background<br />
When I was at NYU ITP, people loved to play foosball, but it was painful to actually play… Gathering four people for foosball was so painful that we had to go ask people, &#8216;you wanna play?&#8217; So I wanted the simplest way to kickstart an action. This web app quickly disseminates the activities a user wants to do to every friend. Before developing the app, I found the key to expressing human activities, 5W1H, and trimmed it to four: What, Where, When, Who. A user (the erector) quickly taps icons and sends the proposal to all of their friends, and the friends accept or deny the message.</p>
<p>B. Machine Learning &amp; Utopia<br />
Many people questioned me about the term &#8216;Utopia.&#8217; I borrowed it from a Kanye West interview:<br />
&#8220;What’s your version of utopia? I don’t think people are going to talk in the future. They’re going to communicate through eye contact, body language, emojis, signs. Imagine that. If everyone was forced to learn sign language.&#8221;</p>
<blockquote class="wp-embedded-content" data-secret="9ENecfx01U"><p><a href="https://www.surfacemag.com/articles/kanye-west-art-design-never-compromise/">Kanye West: Free Form</a></p></blockquote>
<p><iframe loading="lazy" class="wp-embedded-content" sandbox="allow-scripts" security="restricted"  title="&#8220;Kanye West: Free Form&#8221; &#8212; SURFACE" src="https://www.surfacemag.com/articles/kanye-west-art-design-never-compromise/embed/#?secret=8PSKpc9mqd#?secret=9ENecfx01U" data-secret="9ENecfx01U" width="600" height="338" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe><br />
The reason I liked the term &#8216;Utopia&#8217; is that it is the future we are heading toward. Machine learning will soon mimic the human brain in every aspect, even in anything that deals with human emotions; much of pop music is already computer-optimized. The industrial revolution replaced human muscle; now machine learning is replacing human brains. For example, if we had nothing but autonomous cars, would cars need turn signals? If every car were connected, we would not even need traffic lights. In the future, tools we once needed for communication will no longer be needed. So the chat I am building aims at that future: you don&#8217;t talk, and you don&#8217;t have to speak the same language as your friend. The only thing you need is an understanding of the icons.</p>
<p>C. Exclusive VS Inclusive<br />
If you look at Facebook, Snapchat, and Instagram, they all have an exclusiveness: you can limit your post to your friends, excluding other people. This goes back to Facebook, which started for people with emails ending in &#8220;.edu.&#8221; The Lets I am creating is inclusive of every friend on the list because it is action-oriented, not friendliness-oriented. Action-oriented means that instead of staying behind the computer, it makes people get together and do activities together.</p>
<p>D. Expression &amp; Share<br />
If you watched Facebook F8 2016, the key was nothing but better expression and better sharing. Facebook used to own expression; then Snapchat took over with video; now Facebook is answering through Instagram by adding a new feature, Stories. So we can see the trend of how it is evolving: Text &gt; Picture &gt; Video. So I am heading to the other side of expression, before even text: the icon.</p>
<p style="text-align: center;">Icon ⇠   ◉   Text ⇢ Picture ⇢ Video</p>
<p>Instead of competing with the other platforms, I am targeting the other side of expression. It also means better sharing, because friends share a moment by doing the same activity. We don&#8217;t need VR to simulate being together. https://youtu.be/ouE6qyTc-l0?t=8m43s</p>
<p>E. Google VS Instagram<br />
When you search, do you lie about what you need? When you post a picture on Instagram, do you ever post anything ugly? The data Google collects is pure: people do not lie about what they are searching for. On the other hand, the data Instagram collects is not pure, because people on Instagram want to post whatever will trigger the most likes. The data Lets collects will be pure and sincere, because you don&#8217;t lie to your friends, especially when you are scheduling something together. So the future community Lets is aiming for is a utopian one in which people do not lie but are honest.</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>Original Design</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-2516" src="https://www.doyoki.com/wp-content/uploads/2015/12/LETS-2-800x856.png" alt="" width="800" height="856" srcset="https://www.doyoki.com/wp-content/uploads/2015/12/LETS-2.png 800w, https://www.doyoki.com/wp-content/uploads/2015/12/LETS-2-480x514.png 480w, https://www.doyoki.com/wp-content/uploads/2015/12/LETS-2-768x822.png 768w" sizes="auto, (max-width: 800px) 100vw, 800px" /></p>
<p>&nbsp;</p>
<p>&nbsp;</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SPACE TRAVELLING</title>
		<link>https://www.doyoki.com/space-travelling/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Tue, 03 Jul 2018 02:08:49 +0000</pubDate>
				<category><![CDATA[OTHER]]></category>
		<category><![CDATA[Product]]></category>
		<category><![CDATA[WEBAPP]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=485</guid>

					<description><![CDATA[WHAT&#8217;S BEHIND Proposal The original plan was to make a visualization of mathematical equation. If you are familiar with Maya there is phong function. The function that changes the surface of the object in 3D. However, the Phong is named after a mathematician whose name is Phong. So, what we see on the screen is [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><iframe loading="lazy" style="border-radius: 6px;" src="https://www.doyoki.com/project/icm/finalfin2/" width="1000" height="700" frameborder="0" allowfullscreen="allowfullscreen"></iframe><br />
WHAT&#8217;S BEHIND</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-555 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2016/09/screen-shot-2015-11-30-at-3-43-28-pm.png" alt="Screen Shot 2015-11-30 at 3.43.28 PM" width="1440" height="747" /></p>
<ol>
<li>Proposal<br />
The original plan was to make a visualization of a mathematical equation. If you are familiar with Maya, there is the Phong shader, a function that changes the surface appearance of a 3D object. Phong shading is named after the researcher who devised it, Bui Tuong Phong. So what we see on screen often comes from a mathematical equation worked out by some mathematician. My goal was to make a visual representation of a mathematical equation.</li>
<li>Average Equation<br />
I started with a simple equation using an array. I thought of using an array because the value influencing the visual had to change over time. I began with an averaging equation; it was an easy approach, because the sum stored in the array can be divided by the length of the array. Mouse X values are accumulated, and their sum is divided by the array&#8217;s length. I split the window width in two: if the mouse is on the left half, the value is negative, and if it is on the right half, the value is positive. This way the visual can grow and shrink over time.</li>
<li>User Testing A<br />
I showed this to the class, and my classmates were perplexed by the complexity. They did not get what mouseX was doing or why the visual kept changing. I needed to explain the logic behind it and clarify the user interaction.</li>
<li>User Testing B<br />
I simplified the visual and showed it to one of the ITPers, who suggested using a webcam. So I built the webcam and mouseX interaction. Even then, the mouseX was not responding fast enough; I needed something whose value changes dramatically. If you keep your mouse on the right side for a while, you can see the lines moving, and objects moving in front of the webcam change the movement of the lines. If you keep your mouse on the left side for a while, the lines diminish. <a href="https://www.doyoki.com/project/icm/finalfin3/" target="_blank" rel="noopener">https://www.doyoki.com/project/icm/finalfin3/</a></li>
<li>Thanksgiving Break<br />
In bed, I dreamed that I was in darkness watching black ellipses move. I woke up, opened my laptop, and started making a black ellipse that changes its radius. At first the ellipse only got bigger, never smaller, so I changed the if statement, adding &#8220;c = -c;&#8221;. Now the ellipse grew and shrank.</li>
<li>Exponential Equation<br />
So I played more with the ellipse, making it more like my dream. I needed an exponential equation, so I changed the code from &#8220;c = c + d&#8221; to &#8220;c = c + d*f.&#8221; The &#8220;f&#8221; is the counter in the for loop, so the value changes more dramatically:<br />
1 + 1 * 1 = 2<br />
2 + 2 * 2 = 6<br />
6 + 6 * 3 = 24<br />
After changing the equation, watching this sphere grow and shrink, I felt as if I were in space, approaching and then flying away from a star.<br />
<img loading="lazy" decoding="async" class="size-full wp-image-559 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2016/09/screen-shot-2015-12-02-at-4-51-14-pm.png" alt="Screen Shot 2015-12-02 at 4.51.14 PM" width="1440" height="743" /></li>
<li>Final<br />
I added more colors and more ellipses later, and I removed the mouse and webcam interaction because I wanted to eliminate users&#8217; misunderstandings. Here is the final.<br />
<a href="https://www.doyoki.com/project/icm/finalfin2/" target="_blank" rel="noopener">https://www.doyoki.com/project/icm/finalfin2/<br />
</a>And here is a visual interpretation of the code; you can see the radii of the ellipses changing at a larger scale.<iframe loading="lazy" src="https://www.doyoki.com/project/icm/finalfin/" width="1000" height="700" frameborder="0" allowfullscreen="allowfullscreen"></iframe><br />
<a href="https://www.doyoki.com/project/icm/finalfin/" target="_blank" rel="noopener">https://www.doyoki.com/project/icm/finalfin/<br />
</a>And here is the code.<br />
<a href="https://gist.github.com/kdoodoo/45074466d268ed84d66a" target="_blank" rel="noopener">https://gist.github.com/kdoodoo/45074466d268ed84d66a</a></li>
<li>Conclusion<br />
ICM gave me a better understanding of code. I was a computer engineering major for my first two years of college, and back then I enjoyed elective courses like Design 101 more than doing the labs in C# and C++. ICM changed that: I like coding now. I cannot believe there are only three semesters left at ITP; it feels much different from the first semester of a four-year college. I am running out of time. ITP should be a four-year program.</li>
</ol>
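The exponential radius update described above can be sketched in a few lines of plain JavaScript. This is a hypothetical reconstruction that matches the arithmetic listed, not the actual code from the linked gist; the function names and bounds are mine:

```javascript
// Hypothetical reconstruction of the radius update, where each step adds
// the current value times the loop counter f: 1 -> 2 -> 6 -> 24.
function grow(r, f) {
  return r + r * f;
}

// When the radius hits a bound, the increment's sign is flipped
// ("c = -c;"), so the ellipse shrinks again instead of growing forever.
function flipAtBounds(c, r, minR, maxR) {
  return (r >= maxR || r <= minR) ? -c : c;
}

let r = 1;
const trace = [r];
for (let f = 1; f <= 3; f++) {
  r = grow(r, f);
  trace.push(r);
}
// trace is now [1, 2, 6, 24], the sequence listed above
```

Because the step size scales with both the current radius and the counter, the sphere accelerates as it grows, which is what gives the approach-a-star feeling.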
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>PERSPECTIVE IN VR</title>
		<link>https://www.doyoki.com/experimenting-perspective-of-the-vr/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Tue, 03 Jul 2018 01:22:43 +0000</pubDate>
				<category><![CDATA[AR/VR]]></category>
		<category><![CDATA[Think]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=1409</guid>

					<description><![CDATA[Making Immersive VR through the Change of the Perspective. So, I have been interested in the height of the vision  that I wanted to try the vision at the lower level. So I made the VR with the vision of the dog. I used the fig rig, rope and paper cups. First , I gave [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Making Immersive VR through the Change of the Perspective.</p>
<p><img loading="lazy" decoding="async" class="wp-image-1410 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2017/04/20160917_190153-595x230.jpg" alt="" width="792" height="306" srcset="https://www.doyoki.com/wp-content/uploads/2017/04/20160917_190153-595x230.jpg 595w, https://www.doyoki.com/wp-content/uploads/2017/04/20160917_190153-480x185.jpg 480w, https://www.doyoki.com/wp-content/uploads/2017/04/20160917_190153-768x297.jpg 768w, https://www.doyoki.com/wp-content/uploads/2017/04/20160917_190153.jpg 940w" sizes="auto, (max-width: 792px) 100vw, 792px" /></p>
<p>I have been interested in the height of vision, so I wanted to try seeing from a lower level. I made a VR experience from a dog&#8217;s point of view, using a fig rig, a rope, and paper cups.</p>
<p><img loading="lazy" decoding="async" class="wp-image-1411 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2017/04/20160917_162917-595x335.jpg" alt="" width="858" height="483" srcset="https://www.doyoki.com/wp-content/uploads/2017/04/20160917_162917-595x335.jpg 595w, https://www.doyoki.com/wp-content/uploads/2017/04/20160917_162917-480x270.jpg 480w, https://www.doyoki.com/wp-content/uploads/2017/04/20160917_162917-768x432.jpg 768w, https://www.doyoki.com/wp-content/uploads/2017/04/20160917_162917-640x360.jpg 640w, https://www.doyoki.com/wp-content/uploads/2017/04/20160917_162917.jpg 940w" sizes="auto, (max-width: 858px) 100vw, 858px" /></p>
<p>First, I gave it a test shot with the fig rig. I wanted to hide the fig rig from the Theta&#8217;s view, so I used the paper cups to cover it up; that also made visual sense, since dogs sometimes wear a cone collar for medical purposes.</p>
<p><img loading="lazy" decoding="async" class="wp-image-1412 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2017/04/19-595x381.jpg" alt="" width="860" height="551" srcset="https://www.doyoki.com/wp-content/uploads/2017/04/19-595x381.jpg 595w, https://www.doyoki.com/wp-content/uploads/2017/04/19-480x307.jpg 480w, https://www.doyoki.com/wp-content/uploads/2017/04/19-768x492.jpg 768w, https://www.doyoki.com/wp-content/uploads/2017/04/19-960x614.jpg 960w, https://www.doyoki.com/wp-content/uploads/2017/04/19.jpg 1000w" sizes="auto, (max-width: 860px) 100vw, 860px" /></p>
<p>Second, I added a rope to make it more realistic. After the first shot, David suggested adding human interaction with the dog, so he appeared in the shot with great acting, which made the VR wonderfully immersive. When I watched it with Google Cardboard, I felt as if David were touching me. I asked Paula to try it, and she also said she felt like someone was touching her. David raised the point that there is no active will in the experience, which makes the user feel passive.</p>
<p>Rotate the Camera View. It starts from the rear view.</p>
<p><iframe loading="lazy" title="Wally the dog&#039;s view" width="640" height="360" src="https://www.youtube.com/embed/-bMIpfBwYLo?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>ANXIETY EXPLOSIVE</title>
		<link>https://www.doyoki.com/interactive-music/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Wed, 23 May 2018 23:05:42 +0000</pubDate>
				<category><![CDATA[OTHER]]></category>
		<category><![CDATA[Product]]></category>
		<category><![CDATA[WEBAPP]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=908</guid>

					<description><![CDATA[LINK1 : Don &#8211; Yes &#160; The graphic is from the WEBGL of P5.JS. I made a music composed with FL Studio which is YES. The other is Grace&#8217;s recent music, You don&#8217;t own me. So, YES conveys my thought process well. The music is continuously in loop. The musical sense is in a normal [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><span style="color: #ff0000;"><a style="color: #ff0000;" href="https://www.doyoki.com/project/interactivemusic/fin/intmusfin%20copy%202.html" target="_blank" rel="noopener">LINK1 : Don &#8211; Yes</a></span></p>
<p>&nbsp;</p>
<p><img loading="lazy" decoding="async" class=" wp-image-947 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.33.33-595x299.png" alt="Screen Shot 2016-04-26 at 23.33.33" width="852" height="533" /> <img loading="lazy" decoding="async" class="wp-image-952 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.34.25-1-595x372.png" alt="Screen Shot 2016-04-26 at 23.34.25" width="852" height="533" srcset="https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.34.25-1-595x372.png 595w, https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.34.25-1-480x300.png 480w, https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.34.25-1-768x480.png 768w, https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.34.25-1-960x600.png 960w, https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.34.25-1-1240x775.png 1240w, https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.34.25-1-508x318.png 508w, https://www.doyoki.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-26-at-23.34.25-1.png 1440w" sizes="auto, (max-width: 852px) 100vw, 852px" /></p>
<p>The graphics are built with the WEBGL mode of p5.js. One track, &#8220;YES,&#8221; is music I composed with FL Studio; the other is Grace&#8217;s recent song, &#8220;You Don&#8217;t Own Me.&#8221; &#8220;YES&#8221; conveys my thought process well.</p>
<p>The music plays continuously in a loop. It starts in a normal musical state, where the audience simply enjoys it. Once users grow curious about the interaction, they might click. When the user clicks the mouse button, there is a transition to another view.</p>
<p>If that view is tempting, one will keep clicking to explore, as the other track interferes with the sense of normality.</p>
<p>And if one keeps clicking the mouse, the music transitions into an abnormal state: it loses the sense of music and becomes noise.</p>
<p>As a result, I named the work, Anxiety Explosive.</p>
<p><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2666.png" alt="♦" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Technical Note: I used Tone.js, p5.js, and WEBGL. The boxes are driven by the output of the Tone.js analyzer: each box&#8217;s XYZ dimensions stretch and contract, and each box rotates, in relation to the analyzer values. Also, while the mouse is pressed, the camera moves in relation to the mouse X and Y positions.</p>
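As a rough illustration of that analyzer-to-geometry mapping, here is a minimal sketch in plain JavaScript. The helper name, value ranges, and sizes are my assumptions for illustration, not the project's actual code; Tone.js FFT analyser values are decibels, roughly in the range -100 to 0:

```javascript
// Hypothetical helper (not the project's actual code): remap one
// analyser band, in decibels (~ -100 dB to 0 dB), to a box dimension.
function dbToSize(db, minSize, maxSize) {
  const t = Math.min(Math.max((db + 100) / 100, 0), 1); // normalize to 0..1
  return minSize + t * (maxSize - minSize);
}

// Inside p5's draw() loop this would be used roughly like:
//   const bins = analyser.getValue();      // one FFT frame from Tone.Analyser
//   box(dbToSize(bins[0], 10, 200),        // stretch X, Y, Z with three bands
//       dbToSize(bins[1], 10, 200),
//       dbToSize(bins[2], 10, 200));
```

Silence keeps the boxes at their minimum size, and louder bands stretch them, so the geometry pulses with the music.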
<p>&nbsp;</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Would you auto your milk with bread?</title>
		<link>https://www.doyoki.com/would-you-auto-your-milk/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Sat, 03 Mar 2018 15:50:40 +0000</pubDate>
				<category><![CDATA[Think]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=2556</guid>

					<description><![CDATA[“People also viewed” and “People also bought” are great marketing tools of digital shopping experience. Cling to human shopping demand is a key to increase the sales. Drawing human curiosity is a real effective nudge. The other nudge we see often is “Orders over $100 are free shipping.” Making a consumer to target the amount [&#8230;]]]></description>
										<content:encoded><![CDATA[<div class="wp-block-image">
<figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.doyoki.com/wp-content/uploads/2018/03/milk001-800x448.jpeg" alt="" class="wp-image-2561" width="580" height="324" srcset="https://www.doyoki.com/wp-content/uploads/2018/03/milk001-800x448.jpeg 800w, https://www.doyoki.com/wp-content/uploads/2018/03/milk001-480x269.jpeg 480w, https://www.doyoki.com/wp-content/uploads/2018/03/milk001-768x430.jpeg 768w, https://www.doyoki.com/wp-content/uploads/2018/03/milk001.jpeg 1024w" sizes="auto, (max-width: 580px) 100vw, 580px" /></figure>
</div>


<p>“People also viewed” and “People also bought” are great marketing tools of the digital shopping experience. Clinging to shoppers’ demand is a key to increasing sales, and drawing on human curiosity is a really effective nudge. Another nudge we often see is “Orders over $100 ship free.” Getting a consumer to aim at a spending amount is a great tool, and if products are mostly priced at $99, it is even more effective.</p>



<p>Word processors let people write text easily, and one evolution of that is the web page: text that was on physical paper is now on the screen. Listing information and pairing it with human commerce is truly innovative, and now we love Amazon and Alibaba. Machine learning on these conventional platforms has evolved by curating products or content for individuals, as Netflix does.</p>



<p>How would you implement these nudges in a tabletop artificial intelligence? At NAVER CONNECT 2018, CEO Han Seong Sook discussed the difficulty: “When someone is shopping for milk at the supermarket, the bread naturally gets attention too, but that is difficult for artificial intelligence. Ordering milk through a tabletop speaker is just a simple declaration.”</p>



<p>Consumers are rational when they face Alexa: they give orders, and that is it. However, if we tweak “consumers are rational,” we also know consumers are irrational. This is a simple finding we have learned over the decades, and we can see the irrationality through games: people buy items for their personified avatars. In Japan we see otaku marrying dolls. Personifying Alexa, and the ability to individualize Alexa, can bring more value to consumers, just as consumers spend money on their virtual characters in games.</p>


<div class="wp-block-image">
<figure class="aligncenter is-resized"><img loading="lazy" decoding="async" src="https://www.doyoki.com/wp-content/uploads/2018/03/milk002-800x448.jpeg" alt="" class="wp-image-2562" width="580" height="324"/></figure>
</div>


<p>The other option is already present in chat applications. Facebook Messenger suggests food delivery while two users are talking about having lunch; the only downfall is that they were discussing having lunch at a restaurant. Just as we grant apps access to a smartphone’s location, we might need to grant access to our everyday conversations. Instead of calling out “Alexa” or “OK, Google,” we might have to let the artificial intelligence listen to our everyday conversation. Soon Alexa will intervene in our conversations: “Any delivery for lunch?”</p>



<p>Gamification in marketing is effective in games. However, monetization through individualizing an avatar produced instant sales growth but failed over time, as with SK’s Cyworld in Korea: as soon as users feel it is lame to decorate a virtual avatar, the service loses them. Keeping the experience clean and uncluttered by advertising is a key to user experience, but we cannot keep it so dry that nothing draws attention to the bread when someone is buying milk. And letting artificial intelligence listen to our everyday conversation is controversial; we know from Downton Abbey that keeping one’s mouth shut is an important part of a butler’s role.</p>



<p>We might face the need for an auto mode. Just like monthly curation and subscription services, we can put parts of our lives on auto: weekly food delivery and monthly clothes delivery, based on artificial-intelligence analysis. Soon Alexa will deliver your bread right next to your milk.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>STALACTITE</title>
		<link>https://www.doyoki.com/stalactite/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Fri, 09 Feb 2018 00:53:03 +0000</pubDate>
				<category><![CDATA[OTHER]]></category>
		<category><![CDATA[Product]]></category>
		<category><![CDATA[WEBAPP]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=683</guid>

					<description><![CDATA[It is a visualization of STALACTITE. FULL SCREEN1 FULL SCREEN2 &#160;]]></description>
										<content:encoded><![CDATA[<p>It is a visualization of STALACTITE.</p>
<p><a href="https://www.doyoki.com/project/icm/icm_week6/stalactite/index.html" target="_blank" rel="noopener">FULL SCREEN1</a></p>
<p><a href="https://www.doyoki.com/project/icm/icm_week6/stalactite2/index.html" target="_blank" rel="noopener">FULL SCREEN2</a></p>
<p>&nbsp;</p>
<p><iframe loading="lazy" src="https://www.doyoki.com/project/icm/icm_week6/stalactite/index.html" width="1000" height="300" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><iframe loading="lazy" src="https://www.doyoki.com/project/icm/icm_week6/stalactite2/index.html" width="1000" height="300" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Artificial Intelligence is Where Sales at.</title>
		<link>https://www.doyoki.com/artificial-intelligence-is-where-sales-at/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Thu, 07 Dec 2017 12:31:20 +0000</pubDate>
				<category><![CDATA[Think]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=2524</guid>

					<description><![CDATA[Table-top Alexa became a convention of how to implement human computer interaction in household. Amazon Alexa is well thought product because as it is from the distribution company, making a dominance in human life is priority here. Table-top was the backbone of the household economy where American Dad can easily make decision on consumption. On [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Table-top Alexa became a convention of how to implement human computer interaction in household. Amazon Alexa is well thought product because as it is from the distribution company, making a dominance in human life is priority here. Table-top was the backbone of the household economy where American Dad can easily make decision on consumption. On the other hand, speed followers, Google started catching up with the table-top Google home. Google which controls the flow of ubiquitous data lost the track on how to deal with the Table-top convention. Google Android in the smartphone is where all the data collected and enters to the digital. Google played well in data collection in the Web and in the smartphone by marketing editology. People pretty much implant the Android smartphone on their palms.  However, table-top convention confuses the people and Google also does not know how to make sales out of it. Google Next and smart grid can collect data and can be sold to secondary party but lack of convenience and practicality in table-top Google home can lose consumers where all the data is from. On contrary, Alexa became a portal of Amazon.com that consumers love the benefit of product suggestion. It plays the role of a butler who can buy stuff and relieve the effort of shopping. There is almost zero barrier because consumers would rather use voice recognition which might give a little hard time than driving out to the retailers. Google now started suggesting products just like advertisement in the Google search. Would consumers welcome the interference? The experiment will continue in the market. Google should start take a look at how to lower the hurdles and how to attract people from compared to two thumbs. Korean SK also introduced table-top Nugu to Korean market and other companies started introducing more. 
Korean conglomerate, SK which already runs 11St(Korean shopping as Amazon) can benefit from the table-top convention, suggesting products. Now, Apple also entered the table-top market. When it hits the market we will be able to see how they interpret its purpose of making sales.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-2531" src="https://www.doyoki.com/wp-content/uploads/2017/12/asdasd-1.png" alt="" width="995" height="557" srcset="https://www.doyoki.com/wp-content/uploads/2017/12/asdasd-1.png 500w, https://www.doyoki.com/wp-content/uploads/2017/12/asdasd-1-480x269.png 480w" sizes="auto, (max-width: 995px) 100vw, 995px" /></p>
<p>Samsung recently launched Bixby, which resides in the smartphone, building on Samsung&#8217;s popularity in smartphones. But where does Samsung make sales? How would Bixby create sales? Samsung already has a successful platform in Samsung Pay, and it should start implementing Samsung Pay into Bixby. All of its white goods should carry both Bixby and Samsung Pay: a refrigerator with both could instantly work with other consumer industries to order products. Galaxy phones already collect biometric data, with iris and fingerprint scanners, which Alexa cannot compete against.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-2532" src="https://www.doyoki.com/wp-content/uploads/2017/12/hohi.png" alt="" width="1243" height="696" srcset="https://www.doyoki.com/wp-content/uploads/2017/12/hohi.png 500w, https://www.doyoki.com/wp-content/uploads/2017/12/hohi-480x269.png 480w" sizes="auto, (max-width: 1243px) 100vw, 1243px" /></p>
<p>Autonomous driving is another form of artificial intelligence, and people in the industry have already prescribed HaaS (Hardware as a Service). GM is moving forward with a subscription service under the Cadillac brand: consumers can subscribe to cars and change cars monthly. The benefit of autonomous driving fits well in this model because it can create direct sales. Subscribers can order cars, and the autonomous car will be delivered; basically, it will deliver itself. Making sales out of artificial intelligence is a key to success, and companies like Amazon and GM, which deliver tangible products, are moving forward with an eye on where to make the sale.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Human Muscle Mimicry Robot Hand</title>
		<link>https://www.doyoki.com/real-hand/</link>
		
		<dc:creator><![CDATA[kdoodoo]]></dc:creator>
		<pubDate>Fri, 01 Sep 2017 21:17:58 +0000</pubDate>
				<category><![CDATA[Product]]></category>
		<category><![CDATA[ROBOTICS/MAE]]></category>
		<guid isPermaLink="false">https://www.doyoki.com/?p=619</guid>

					<description><![CDATA[Human Muscle interpretation in robotic arm mechanism. Motorized device has a limited mimicry on human/animal muscle movement. Being precise to satisfy industrial demands has allowed robots to replace human labor. However, it lost a sense of human mimicry in natural movement. Just like musicians avoid exact tempo of the instruments at the MIDI software to [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" decoding="async" class="wp-image-620 aligncenter" src="https://www.doyoki.com/wp-content/uploads/2016/02/presentation-595x476.png" alt="presentation" width="893" height="714" /></p>
<p>An interpretation of human muscle in a robotic-arm mechanism.</p>
<p>Motorized devices have limited mimicry of human and animal muscle movement. Being precise enough to satisfy industrial demands has allowed robots to replace human labor.</p>
<p>However, that precision lost the sense of human mimicry, of natural movement. Just as musicians avoid the exact tempo of instruments in MIDI software to make a more human sound, the movement of this robot hand mimics natural human motion.</p>
<p>Implementing humanness in the robotic hand draws on a famous old animal experiment: Galvani&#8217;s experiment on dead frogs, which showed that animal muscle is triggered by electricity.</p>
<p>Making a frog&#8217;s leg muscle twitch by applying electricity clearly evokes disgust at combining nature with an artificial implant. From the scientific side of the experiment, however, it allowed me to make this human-like robot arm.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-1521" src="https://www.doyoki.com/wp-content/uploads/2015/09/complete-595x506.jpg" alt="" width="804" height="683" srcset="https://www.doyoki.com/wp-content/uploads/2015/09/complete-595x506.jpg 595w, https://www.doyoki.com/wp-content/uploads/2015/09/complete-480x408.jpg 480w, https://www.doyoki.com/wp-content/uploads/2015/09/complete-768x653.jpg 768w, https://www.doyoki.com/wp-content/uploads/2015/09/complete-960x817.jpg 960w, https://www.doyoki.com/wp-content/uploads/2015/09/complete.jpg 1200w" sizes="auto, (max-width: 804px) 100vw, 804px" /></p>
<p>Instead of digital input to analog output, here both the reading and the output are analog. A magnet was used to convey the natural instability of human motion. Instability means imperfection, the opposite of industrial robots driven by G-code.</p>
<p>As one can see, the servo motor shakes as the magnetic-flux sensor approaches the magnet, just as human limbs respond to an object before touching it directly.</p>
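The analog-in to analog-out mapping with added instability can be sketched roughly as follows. This is a hypothetical illustration in JavaScript, kept in the same language as this site's other sketches; the original presumably ran on a microcontroller, and the value ranges and jitter amount are my assumptions:

```javascript
// Hypothetical sketch (not the actual firmware): map a magnetic-flux
// reading to a servo angle, with jitter that grows as the magnet nears,
// mimicking the shakiness of a human limb approaching an object.
// flux is assumed to be a 10-bit ADC reading, 0..1023.
function fluxToAngle(flux, jitterFn = Math.random) {
  const t = Math.min(Math.max(flux / 1023, 0), 1); // normalize the reading
  const angle = t * 180;                           // base servo angle, 0..180 degrees
  const jitter = (jitterFn() - 0.5) * 10 * t;      // instability scales with proximity
  return angle + jitter;
}
```

With the magnet far away the output is steady; as the sensor reading rises, the jitter term grows and the servo begins to tremble before "contact," which is the imperfection the piece is after.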
<p>Exploring this human mimicry can lower people&#8217;s resistance to artificial robots.</p>
<p>&nbsp;</p>
<p><iframe loading="lazy" src="https://www.youtube.com/embed/3yon_pmzgTo" width="800" height="600" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p><iframe loading="lazy" src="https://www.youtube.com/embed/9nvLbb9WgFo" width="800" height="600" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
