
path to increasing stock $$ for some via future AI services


Recommended Posts

for contribs that are already netting 20 cents or more per year per online image;
(your 2022 total stock net income / your total number of stock images end of 2022)
requires reasonable results from AI processing + AI tagging services;
assumes service cost of 5 cents per image AI processing, 5 cents per image AI tagging;
assumes minimal follow-up work required by contrib;
if Adobe or other services offer AI processing-tagging direct to contrib,
then no pre-existing net$$/image/yr online needed...
assumes significant time previously required to process-tag
will then be spent shooting many more new salable images =
higher volume = more money in pocket.
(if you don't get the post, read again; if you still don't understand, read again; if you...)
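A minimal sketch of the arithmetic above; the 20-cent threshold and the 5 + 5 cents per image service cost come from this post, while the portfolio figures are made-up assumptions:

```python
# Break-even sketch for the assumptions in this post:
# AI processing at $0.05/image + AI tagging at $0.05/image,
# worthwhile only if an image already nets >= $0.20/year.
# Portfolio figures below are hypothetical.

net_income_2022 = 1_200.00      # your 2022 total stock net income ($)
images_online_end_2022 = 5_000  # your total number of stock images at end of 2022

net_per_image_per_year = net_income_2022 / images_online_end_2022
ai_cost_per_image = 0.05 + 0.05  # processing + tagging

print(f"net per image per year: ${net_per_image_per_year:.2f}")
print(f"AI service cost per image: ${ai_cost_per_image:.2f}")
print("worth considering" if net_per_image_per_year >= 0.20 else "below the 20-cent threshold")
```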
Edited by Jeffrey Isaac Greenberg

There is a stock library I uploaded to some years ago that tags with AI, and I experienced ridiculous results. My image of a bicycle wheel was tagged automatically as an animal. See attachment: screen-02-08-2019.jpg


👍 thanks for responses
but you both are talking about current or recent past;
I'm talking about future improved versions
in which AI system samples, say, 1000-5000 of your
processed-by-you & tagged-by-you images
so AI results are SPECIFIC & REASONABLY ACCURATE
to your way of tagging-processing...
Edited by Jeffrey Isaac Greenberg

AI has to have enough samples to know what something is, or what family/type of thing it's close to. How will AI have enough samples of, say, deaf kids using Nicaraguan Sign Language and not some other sign language, and not printed or painted signs advertising something? (I have the only photos of deaf kids using Nicaraguan Sign Language on Alamy.) Two licenses, for high $$ to low $$$. Humans who didn't Google Nicaraguan Sign Language put the tag on utterly irrelevant photos. It grates on me the way labeling a small Buteo an eagle does. The AI would have to be smarter than a number of humans to get this right, since a number of humans seem not to get it right.

 

I've found the easiest way to do captions and tagging is to upload sensible numbers daily or so, then navigate from magnified image to magnified image, using the check mark for selected or not at the bottom of the image. Have Wikipedia open as a tab.

 

 


20 hours ago, Rebecca Ore said:

How will AI have enough samples of say, deaf kids using Nicaraguan Sign Language...

dismissing future acceptable AI processing-tagging because of
an imagined one-off problematic result is like a country's leader
dismissing
all immigration because of one problematic immigrant, IMO;
in future acceptable AI tagging offered as purchased program
or via subscription, one may be advised to submit batches
from same shoot-subject so AI could perceive sign language
vs. hand waving; also user might have option to input repeating
tags for all images in batch, e.g., "Managua Nicaragua Nicaraguan"
so although not a phrase, [Nicaraguan sign language] is there!!
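A minimal sketch of the "repeating tags for the whole batch" idea: shoot-level tags appended to whatever the AI produced for each image. Filenames and tags are hypothetical:

```python
# Hypothetical sketch: append shoot-level repeating tags to every image in a batch,
# so terms like "Managua", "Nicaragua", "Nicaraguan" ride along with whatever the
# AI produced per image.

batch_tags = ["Managua", "Nicaragua", "Nicaraguan", "sign language"]

ai_tags_per_image = {
    "IMG_0001.jpg": ["children", "hands", "gesture"],
    "IMG_0002.jpg": ["classroom", "teacher", "deaf"],
}

def merge_tags(per_image: dict[str, list[str]], repeating: list[str]) -> dict[str, list[str]]:
    """Return a new mapping with the repeating tags appended, skipping duplicates."""
    merged = {}
    for filename, tags in per_image.items():
        seen = {t.lower() for t in tags}
        merged[filename] = tags + [t for t in repeating if t.lower() not in seen]
    return merged

for name, tags in merge_tags(ai_tags_per_image, batch_tags).items():
    print(name, tags)
```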
Edited by Jeffrey Isaac Greenberg

15 hours ago, Jeffrey Isaac Greenberg said:
dismissing future acceptable AI processing-tagging because of
an imagined one-off problematic result is like a country's leader
dismissing
all immigration because of one problematic immigrant, IMO;
in future acceptable AI tagging offered as purchased program
or via subscription, one may be advised to submit batches
from same shoot-subject so AI could perceive sign language
vs. hand waving; also user might have option to input repeating
tags for all images in batch, e.g., "Managua Nicaragua Nicaraguan"
so although not a phrase, [Nicaraguan sign language] is there!!

 

Basically, if bog-standard humans often get this so badly wrong, the cost of a good, reliable program is going to be really up there, and, as with a lot of things, it will be about selling the thing more than anything else. Also, I'm 75, and it's as likely that I would make more money doing more novels than doing more photography on my bad knees. The searches for my photos have been relatively simple. One was "cichlid pair", not even a species. A third to a half of the photos of Jinotega are mine.

 

I see tons of saltwater parrot fish keyworded as "cichlids."

 

Alamy needs to fix the messy key-wording that's there already.   If AIs can do that, more power to them. 

 

My problem is taking better photographs, not doing indexing; I did interactive indexing for CD-ROMs of medical texts at one point in my life. Alamy's problem, and any AI's problem, is that with each of us training our own AIs on our previous keywording, the individual AI ends up with baked-in problems. It may solve your problem; it won't solve Alamy's problem of "Nicaraguan sign language" being a bunch of painted words, or smaller buteos being transformed into eagles, or merlins and kestrels being magically turned into peregrine falcons.

 

A few years back, someone mislabeled a parrot, which the client published as a very rare parrot, only to learn from readers of ornithology books that it wasn't. These kinds of mistakes make Alamy look bad. Alamy wants to know what holes it needs to fill. Someone who knows a field intimately is likely to have contacts for getting access; someone who is spraying masses of keywords, probably not. If searches give results that aren't what the person was looking for, Alamy needs to know, or needs photographers who know what the search was really for.

Edited by Rebecca Ore
Peregrine for the larger falcon.

On 13/07/2023 at 21:54, Jeffrey Isaac Greenberg said:
👍 thanks for responses
but you both are talking about current or recent past;
I'm talking about future improved versions
in which AI system samples, say, 1000-5000 of your
processed-by-you & tagged-by-you images
so AI results are SPECIFIC & REASONABLY ACCURATE
to your way of tagging-processing...

WRT AI tagging - I think you're being overly optimistic. I fear the quality of AI-tagged results is already starting to degrade as the training data becomes progressively contaminated with AI's earlier mistakes. We are entering an era where what's fake (or AI generated or tagged) and what's real (validated and true) is becoming more and more difficult to determine. So how do AI companies ensure the validity of their tagged image training data? Garbage in => garbage out. I agree that if specific validated data is used for training, and the images to be AI keyworded contain similar subject matter, the results will be better. But it probably needs way more than 5,000 images, as AI is really just a statistically based process. AI-generated location tags based on image EXIF GPS data could work well though, especially for your travel imagery.
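To illustrate the EXIF GPS idea, here is a minimal sketch that converts the degrees/minutes/seconds values typically stored in EXIF GPS fields into decimal coordinates; turning those into a place-name tag would still need a reverse-geocoding lookup, which is beyond this sketch, and the example values are made up:

```python
# Hypothetical sketch: EXIF GPS values are usually stored as
# (degrees, minutes, seconds) plus a hemisphere reference (N/S, E/W).
# Converting them to decimal degrees is the first step towards an
# automatic location tag; mapping coordinates to a place name would
# need a separate reverse-geocoding service.

def dms_to_decimal(dms: tuple[float, float, float], ref: str) -> float:
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# Example values (hypothetical): roughly central London.
lat = dms_to_decimal((51.0, 30.0, 26.0), "N")
lon = dms_to_decimal((0.0, 7.0, 39.0), "W")
print(f"latitude {lat:.5f}, longitude {lon:.5f}")
```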

 

WRT image processing - I think AI adjustment to suit your own processing style could work well, but results will still need checking.

 

Mark

Edited by M.Chapman

Might it be possible to offer many thousands of RAW files with processed JPEGs and have an AI system learn how an individual tends to process images?
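A minimal sketch of what assembling such a training set might look like: pairing RAW files with the photographer's own processed JPEGs by filename stem. Folder names and extensions are assumptions:

```python
# Hypothetical sketch: pair RAW files with the contributor's own processed JPEGs
# by filename stem, producing (raw, jpeg) pairs an ML pipeline could learn from.
from pathlib import Path

RAW_DIR = Path("raw")         # assumed folder of original RAW files
JPEG_DIR = Path("processed")  # assumed folder of the photographer's finished JPEGs
RAW_EXTS = {".cr2", ".nef", ".arw", ".dng"}

def build_pairs(raw_dir: Path, jpeg_dir: Path) -> list[tuple[Path, Path]]:
    if not raw_dir.exists() or not jpeg_dir.exists():
        return []
    jpegs = {p.stem: p for p in jpeg_dir.glob("*.jpg")}
    pairs = []
    for raw in raw_dir.iterdir():
        if raw.suffix.lower() in RAW_EXTS and raw.stem in jpegs:
            pairs.append((raw, jpegs[raw.stem]))
    return pairs

if __name__ == "__main__":
    for raw, jpg in build_pairs(RAW_DIR, JPEG_DIR):
        print(raw.name, "->", jpg.name)
```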

 

But on reflection, this is already happening, with one's images compared against AI-determined presets presumably based on millions of images.

 

1) I'm using DXO Pro at this very moment to process my recent images - it gives me no control (no sliders or anything) but I now trust it as my first stage of image processing.

2) Then for each image I will use the 'Auto' function in ACR, which most times gives an immediate improvement from which to work. Again, no control over the Auto results, but could my subsequent tweaking be learned by the AI system with ACR?

 

So, I am already using AI. And I am willing to incrementally use more and more of it. This is how it will develop: in stages, not a sudden jump.

 

Perhaps more important is how Alamy and other large collections can use AI to learn what customers 'mean' when they use search terms - which search terms lead to zooms and sales, matched with specific images.
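A minimal sketch of the kind of signal this would need: aggregating, per image, which search terms led to zooms and which led to sales. The event-log format and field names are assumptions, not Alamy's actual data:

```python
# Hypothetical sketch: aggregate search logs so that, per image, we can see
# which search terms led to zooms and which led to sales.
from collections import defaultdict

# Assumed event log: (search_term, image_id, event) where event is "view", "zoom" or "sale".
events = [
    ("nicaraguan sign language", "IMG001", "zoom"),
    ("nicaraguan sign language", "IMG001", "sale"),
    ("sign language", "IMG002", "zoom"),
    ("cichlid pair", "IMG003", "sale"),
]

signals = defaultdict(lambda: defaultdict(lambda: {"zoom": 0, "sale": 0}))
for term, image_id, event in events:
    if event in ("zoom", "sale"):
        signals[image_id][term][event] += 1

for image_id, terms in signals.items():
    for term, counts in terms.items():
        print(image_id, term, counts)
```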

Edited by geogphotos

27 minutes ago, Harry Harrison said:

With the enticing new combination of AI keywording and AI searching, will Alamy "boldly go where no one has gone before"? Where will the Alamy ranking fit in with all this, I wonder.

 

 

 

Exactly, it could be a paradigm shift for the agency that makes the best use of AI for both its contributors and buyers.


28 minutes ago, geogphotos said:

Exactly, it could be a paradigm shift for the agency that makes the best use of AI for both its contributors and buyers.

I suppose the Alamy algorithm is a kind of primitive AI; all right, it's not thinking for itself, but it is throwing up images according to a set of opaque rules. Personally, I don't think it's necessarily offering up the best images for any given search, and Alamy clearly needs to do that if they want to compete with agencies that succeed in doing so.

 

When the new collections came out I did wonder if AI was being used to move images into 'Vital' and 'Ultimate'; even if it isn't yet, it could be used for that, I imagine.

 

I don't want you to think that I'm an AI zealot, though. A couple of months ago I had some payments fraudulently taken from my Amazon account, and after many hurdles I did manage to speak to a person. Or did I? I actually had to ask her if she was a real person, and I'm still not sure.

 



 

Edited by Harry Harrison

If AI makes as good a job of decoding images as it does of decoding speech for YouTube subtitles, I'm not planning to change my workflow any time soon.

 

Alan

 


5 minutes ago, Inchiquin said:

If AI makes as good a job of decoding images as it does of decoding speech for YouTube subtitles, I'm not planning to change my workflow any time soon.

 

Alan

 

 

Exactly. There's a lot of hype around AI. It's not really intelligent: if it is fed rubbish data, it gives rubbish answers. And there's a lot of awful keywording out there.

 

Yes, it may be better some day. And there will be self driving cars some day. And anti-aging creams may actually work one day...


12 hours ago, Jeffrey Isaac Greenberg said:
I'm talking about future improved versions
in which AI system samples, say, 1000-5000 of your
processed-by-you & tagged-by-you images
so AI results are SPECIFIC & REASONABLY ACCURATE
to your way of tagging-processing...

 

Sounds great. Would certainly use it. Assuming I'm good at keywording already... (I calculated for last year, 22.5 cents apparently per image).


all I'm saying is this
and what it is, is this:
AI tagging-processing will reduce
time spent by contrib doing it...
that time can be converted to
shooting time producing more
salable images to be AI tagged-processed
which will increase net$$
AND LIKE A SNOWBALL ROLLING DOWN A HILL
THE ANNUAL NET$$ GETS
BIGGER 🤑
AND BIGGER 🤑 🤑
AND BIGGER
🤑 🤑 🤑
remind me on 1/1/26 that it hasn't happened yet...
look at the "miracle" of instant content aware replacement
through the eyes of a 1995 transparency (slide film) stock shooter...
look at all those 2003 stock shooters who said digital
quality would NEVER equal film quality....
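A minimal sketch of the snowball arithmetic in the post above; every figure (hours saved, shooting rate, net per image) is an invented assumption, not a forecast:

```python
# Hypothetical sketch of the "snowball" argument: time no longer spent on
# tagging/processing is converted into extra shooting, which adds images,
# which adds net income. All figures below are invented assumptions.

hours_saved_per_year = 100        # assumed time freed by AI tagging/processing
new_images_per_hour = 5           # assumed shooting + light editing rate
net_per_image_per_year = 0.20     # the 20-cent figure from the opening post
years = 3

portfolio_growth = 0
for year in range(1, years + 1):
    portfolio_growth += hours_saved_per_year * new_images_per_hour
    extra_net = portfolio_growth * net_per_image_per_year
    print(f"year {year}: +{portfolio_growth} images online, extra net ${extra_net:.2f}/yr")
```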

1 hour ago, Jeffrey Isaac Greenberg said:
all I'm saying is this
and what it is, is this:
AI tagging-processing will reduce
time spent by contrib doing it...
that time can be converted to
shooting time producing more
salable images to be AI tagged-processed
which will increase net$$
AND LIKE A SNOWBALL ROLLING DOWN A HILL
THE ANNUAL NET$$ GETS
BIGGER 🤑
AND BIGGER 🤑 🤑
AND BIGGER
🤑 🤑 🤑
remind me on 1/1/26 that it hasn't happened yet...
look at the "miracle" of instant content aware replacement
through the eyes of a 1995 transparency (slide film) stock shooter...
look at all those 2003 stock shooters who said digital
quality would NEVER equal film quality....

 

Human perceptions program these things -- it's like a playback of your own head if you train the AI on your own key-wording and images. Figuring out what an image is of requires not having lots of mislabeled things in the image database. Alamy's current presenting problem is a lot of mislabeled things in their image database. Key-wording and captioning isn't MY bottleneck. Taking and editing is.

 

I had people telling me digital was as good as film at a time when it wasn't. I got my first DSLR in 2005, and yeah, by then, or even with the Olympus digital that a thief stole, it was. The annoying thing was people claiming it was as good when it wasn't. The other thing is I'm old enough to remember that errors got baked into systems, especially with data entry done by scanning or by hand. Fixing computer programming errors is still non-trivial. And OCR isn't perfect yet -- I've been asked to do a third copy edit before the end of the month, and that's a technology that's been in use since the 1980s. Mostly it gets things right, but never always, and not as well as a highly skilled typesetter, just way, way cheaper.

 

 


22 hours ago, Steve F said:

 

Exactly. There's a lot of hype around AI. It's not really intelligent: if it is fed rubbish data, it gives rubbish answers. And there's a lot of awful keywording out there.

 

Yes, it may be better some day. And there will be self driving cars some day. And anti-aging creams may actually work one day...

 

And I'm absolutely sure I saw a pig in the sky the other day. Oh, wait a minute, I was listening to Pink Floyd's "Animals."

 

Alan


  • 2 weeks later...

I remember with pain the scanning service in Poland that included keywording as part of the package. The boss, who was English, stated that his operatives were instructed to just describe what they saw in the photo. The results were butt-grindingly awful! It was a free trial of about a dozen images. Could AI do much better? I doubt it.


On 29/07/2023 at 01:12, Nodvandigtid said:

Mmm... not impressed. At this rate Alamy's requirement to manually select Supertags (even on submitted images with AI generated tags in their metadata) may help preserve some quality in the search results? Perhaps Alamy should give an even higher weighting to Supertags to offset the tsunami of images being uploaded with lousy and spammy AI generated tags?

 

Mark

Edited by M.Chapman

14 hours ago, M.Chapman said:

Mmm... not impressed. At this rate Alamy's requirement to manually select Supertags (even on submitted images with AI generated tags in their metadata) may help preserve some quality in the search results? Perhaps Alamy should give an even higher weighting to Supertags to offset the tsunami of images being uploaded with lousy and spammy AI generated tags?

 

Mark

 

 

I would suggest a higher weighting to the caption field because that is where the most essential information goes. 

 

I must admit to not being a fan of Supertags because of their optionality and uniqueness to Alamy.
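One way to picture the weighting suggestions in this exchange (caption vs. Supertags vs. ordinary tags) is a simple field-weighted match score; the weights and the example record below are arbitrary illustrations, not Alamy's actual ranking:

```python
# Hypothetical sketch of field-weighted matching: a query term that matches the
# caption counts for more than a Supertag match, which counts for more than an
# ordinary tag match. Weights are arbitrary illustrations.

FIELD_WEIGHTS = {"caption": 3.0, "supertags": 2.0, "tags": 1.0}

image = {
    "caption": "Deaf children using Nicaraguan Sign Language in a Jinotega classroom",
    "supertags": ["Nicaraguan Sign Language", "deaf"],
    "tags": ["children", "classroom", "Jinotega", "Nicaragua"],
}

def score(record: dict, query: str) -> float:
    terms = query.lower().split()
    total = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        value = record[field]
        text = value.lower() if isinstance(value, str) else " ".join(value).lower()
        total += weight * sum(term in text for term in terms)
    return total

print(score(image, "nicaraguan sign language"))
```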

Edited by geogphotos

Unfortunately, I don't think that AI is up to the job. It is able to identify subject matter at only the most basic, generic level. That might be all that is needed for microstock (green bug on leaf), but it is no good for Alamy and 'proper' editorial stock photography, which needs Latin names of species, exact location, specific information.

 

Logically, with an ever-growing number of competing images, it would be wise to slim down the keywords/tags to only the most relevant, core ones.

