
21

Sunday, July 10th 2016, 4:19pm

Something like this:
ZOOMBLEND(time, zoom, tweentype, direction)

where direction is an angle from 0 to 359; 0 means forward and is identical to the classic ZOOMBLEND.

Posts: 1,082

Location: Russia, Kaliningrad


22

Monday, July 11th 2016, 2:03pm

Hi!
There is a set of blend styles:
http://krpano.com/docu/actions/#loadpano.blend
so you can decide which one works best for you and simply change it in the loadscene action.
You can also detect the direction of the movement and set it dynamically as a parameter.
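For example, SLIDEBLEND already accepts an angle, so the movement direction could be passed through to the transition. A rough, untested sketch in krpano action syntax (the action name and parameters here are illustrative, not from the docs):

```xml
<!-- Hypothetical sketch: pass a direction-dependent blend into loadscene.
     SLIDEBLEND(time, angle, smooth, tweentype) takes an angle in degrees,
     so the heading toward the next scene can be reused as that angle. -->
<action name="goto_scene">
    <!-- %1 = target scene name, %2 = movement direction in degrees -->
    loadscene(%1, null, MERGE, SLIDEBLEND(1.0, %2, 0.2, linear));
</action>
```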

Hope it helps
Regards
Andrey
VRAP - desktop VR content player based on krpano.
Common tasks in one place in one click! Discussion thread
DOWNLOAD for MAC
DOWNLOAD for WIN

23

Monday, July 11th 2016, 3:20pm

I think there is some misunderstanding.

There are no transitions like Google Street View, like Matterport, or like this:
https://www.youtube.com/watch?v=CQ-cwJ_UD6Q&feature=youtu.be
based on https://github.com/henryseg/spherical_image_editing

Everything krpano has is a "slide show" effect that does not take the direction of movement in the tour into account.
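The linked repo applies Möbius transformations to spherical images; a minimal sketch of the underlying idea (my reduction, not the repo's code) is to stereographically project a viewing direction to the complex plane, apply w → (aw + b)/(cw + d), and project back:

```python
import numpy as np

def sphere_to_plane(p):
    """Stereographic projection from the north pole (0, 0, 1) to a complex number."""
    x, y, z = p
    return complex(x, y) / (1.0 - z)

def plane_to_sphere(w):
    """Inverse stereographic projection back to the unit sphere."""
    d = 1.0 + abs(w) ** 2
    return np.array([2 * w.real / d, 2 * w.imag / d, (abs(w) ** 2 - 1) / d])

def mobius_on_sphere(p, a, b, c, d):
    """Apply the Mobius map w -> (a*w + b) / (c*w + d) to a unit-sphere direction p."""
    w = sphere_to_plane(p)
    return plane_to_sphere((a * w + b) / (c * w + d))

# The identity map (a=1, b=0, c=0, d=1) leaves a direction unchanged:
p = np.array([0.6, 0.0, 0.8])
print(np.allclose(mobius_on_sphere(p, 1, 0, 0, 1), p))  # True
```

Choosing a ≠ 1 scales the plane, which on the sphere looks like a zoom toward one pole; animating such coefficients is what produces those transition effects.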



Posts: 1,082

Location: Russia, Kaliningrad


24

Monday, July 11th 2016, 6:11pm

Yep, you are right, krpano has no such effect

26

Wednesday, February 26th 2020, 2:05pm

Using ContextCapture to make a 3D model is easy

Nice if we could do something similar in krpano.
Theoretically such would be possible of course ;-) - but a 3d-geometry (or a depth-map) for the pano would be required. The question is where to get or how to build that geometry? A normal pano image alone doesn't contain/provide that information.

Best regards,
Klaus


Hi Klaus,
Making a 3D model is easy now: after shooting the panorama, you can use ContextCapture, which converts the photos into a 3D model. Then that model plus the panorama give you the "Matterport effect". Another wonderful example is GeoCV; here is a link: https://geocv.com/americancopper31c
GeoCV uses a Samsung S8 smartphone and a Structure Sensor to build the 3D model, but in reality we don't need a Structure Sensor; a phone alone is enough.
I made an indoor 3D model with my iPhone 11; I'll send the model and panorama to you for testing. Have a nice day!
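Klaus's requirement of a depth map can be made concrete: given per-pixel depth for an equirectangular pano, every pixel maps to a 3D point, which is exactly the geometry such transitions need. A minimal sketch (pixel-to-angle convention is an assumption, not krpano's):

```python
import numpy as np

def equirect_to_points(depth):
    """Convert an equirectangular depth map (H x W, meters) into 3D points.

    Assumed convention: longitude spans [-pi, pi) left to right,
    latitude spans [pi/2, -pi/2] top to bottom, sampled at pixel centers.
    """
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit viewing direction per pixel, scaled by that pixel's depth.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return depth[..., None] * np.stack([x, y, z], axis=-1)  # H x W x 3

# A constant 2 m depth map yields points all 2 m from the camera:
pts = equirect_to_points(np.full((4, 8), 2.0))
print(np.allclose(np.linalg.norm(pts, axis=-1), 2.0))  # True
```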



simplify has attached the following images:
  • 3.png
  • 4.png

Posts: 1

Location: Lille - France

Occupation: 360° Photographer


27

Sunday, April 12th 2020, 2:18pm

Matterport is OK for Real Estate but bad for marketing

What I don't like about Matterport is that it's a proprietary solution. The other point is that all the visits are the same and that only the prices are taken into account. The result is a red ocean for photographers who get started with this technology.


The other aspect is that the 360° real estate market is extremely weak in France and certainly in Europe.


For me the only way to add value to the virtual visit is to make enriched visits with multimedia. Tools like krpano and other non-proprietary systems are more flexible.



Personally, it is essentially the Google Street View platform and the enrichments on Street View virtual tours that allow me to make a living.



If you want to know more, you can visit my website: https://visite360pro.com/

28

Friday, April 24th 2020, 9:56am

Some years ago I looked into Matterport, so I can't say my findings from back then still hold true, but at the time I concluded it was a mere photogrammetric approach: images are uploaded to the cloud, pairs are found, features are extracted, their relative movement to each other is determined, and voilà, you get the distance of each feature to the viewpoint.
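The last step of that pipeline can be sketched: once a feature is matched in two images and the camera poses are known, its 3D position (and hence its distance) follows from intersecting the two viewing rays. A minimal least-squares triangulation (illustrative, not Matterport's actual method):

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares intersection of two rays c_i + t_i * d_i.

    Returns the midpoint of the closest points on the two rays, a common
    way to triangulate a matched feature from two camera views.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 | -d2] @ [t1, t2]^T ~= c2 - c1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)                 # 3 x 2
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t[0] * d1                             # closest point on ray 1
    p2 = c2 + t[1] * d2                             # closest point on ray 2
    return (p1 + p2) / 2

# Two cameras 1 m apart, both seeing a feature at (0, 0, 5):
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
point = triangulate(c1, np.array([0.0, 0.0, 1.0]), c2, np.array([-1.0, 0.0, 5.0]))
print(point)  # ~ [0, 0, 5]
```

The relative movement of a feature between the two images is what determines the ray directions; nearer features shift more, which is why their distance can be recovered at all.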